00:00:00.000 Started by upstream project "autotest-per-patch" build number 130945 00:00:00.000 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.121 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.122 The recommended git tool is: git 00:00:00.122 using credential 00000000-0000-0000-0000-000000000002 00:00:00.123 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.144 Fetching changes from the remote Git repository 00:00:00.146 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.171 Using shallow fetch with depth 1 00:00:00.171 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.171 > git --version # timeout=10 00:00:00.206 > git --version # 'git version 2.39.2' 00:00:00.206 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.228 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.228 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:04.292 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:04.304 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:04.317 Checking out Revision bc56972291bf21b4d2a602b495a165146a8d67a1 (FETCH_HEAD) 00:00:04.317 > git config core.sparsecheckout # timeout=10 00:00:04.329 > git read-tree -mu HEAD # timeout=10 00:00:04.346 > git checkout -f bc56972291bf21b4d2a602b495a165146a8d67a1 # timeout=5 00:00:04.364 Commit message: "jenkins/jjb-config: Remove extendedChoice from ipxe-test-images" 00:00:04.364 > git rev-list --no-walk bc56972291bf21b4d2a602b495a165146a8d67a1 # timeout=10 00:00:04.560 [Pipeline] Start of Pipeline 00:00:04.576 [Pipeline] library 00:00:04.578 Loading library shm_lib@master 00:00:04.578 Library shm_lib@master is cached. Copying from home. 00:00:04.599 [Pipeline] node 00:00:04.616 Running on VM-host-SM9 in /var/jenkins/workspace/nvme-vg-autotest 00:00:04.618 [Pipeline] { 00:00:04.629 [Pipeline] catchError 00:00:04.631 [Pipeline] { 00:00:04.647 [Pipeline] wrap 00:00:04.657 [Pipeline] { 00:00:04.665 [Pipeline] stage 00:00:04.667 [Pipeline] { (Prologue) 00:00:04.685 [Pipeline] echo 00:00:04.687 Node: VM-host-SM9 00:00:04.693 [Pipeline] cleanWs 00:00:04.702 [WS-CLEANUP] Deleting project workspace... 00:00:04.702 [WS-CLEANUP] Deferred wipeout is used... 
00:00:04.707 [WS-CLEANUP] done 00:00:04.894 [Pipeline] setCustomBuildProperty 00:00:04.961 [Pipeline] httpRequest 00:00:05.327 [Pipeline] echo 00:00:05.328 Sorcerer 10.211.164.101 is alive 00:00:05.337 [Pipeline] retry 00:00:05.339 [Pipeline] { 00:00:05.350 [Pipeline] httpRequest 00:00:05.354 HttpMethod: GET 00:00:05.355 URL: http://10.211.164.101/packages/jbp_bc56972291bf21b4d2a602b495a165146a8d67a1.tar.gz 00:00:05.355 Sending request to url: http://10.211.164.101/packages/jbp_bc56972291bf21b4d2a602b495a165146a8d67a1.tar.gz 00:00:05.356 Response Code: HTTP/1.1 200 OK 00:00:05.357 Success: Status code 200 is in the accepted range: 200,404 00:00:05.357 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/jbp_bc56972291bf21b4d2a602b495a165146a8d67a1.tar.gz 00:00:06.313 [Pipeline] } 00:00:06.325 [Pipeline] // retry 00:00:06.330 [Pipeline] sh 00:00:06.616 + tar --no-same-owner -xf jbp_bc56972291bf21b4d2a602b495a165146a8d67a1.tar.gz 00:00:06.631 [Pipeline] httpRequest 00:00:06.963 [Pipeline] echo 00:00:06.964 Sorcerer 10.211.164.101 is alive 00:00:06.971 [Pipeline] retry 00:00:06.974 [Pipeline] { 00:00:06.984 [Pipeline] httpRequest 00:00:06.987 HttpMethod: GET 00:00:06.987 URL: http://10.211.164.101/packages/spdk_1c2942c866c4eadd7e87faeedd607eae6985084c.tar.gz 00:00:06.988 Sending request to url: http://10.211.164.101/packages/spdk_1c2942c866c4eadd7e87faeedd607eae6985084c.tar.gz 00:00:06.999 Response Code: HTTP/1.1 200 OK 00:00:07.000 Success: Status code 200 is in the accepted range: 200,404 00:00:07.000 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/spdk_1c2942c866c4eadd7e87faeedd607eae6985084c.tar.gz 00:00:28.761 [Pipeline] } 00:00:28.775 [Pipeline] // retry 00:00:28.782 [Pipeline] sh 00:00:29.060 + tar --no-same-owner -xf spdk_1c2942c866c4eadd7e87faeedd607eae6985084c.tar.gz 00:00:31.606 [Pipeline] sh 00:00:31.886 + git -C spdk log --oneline -n5 00:00:31.886 1c2942c86 module/vfu_device/vfu_virtio_rpc: log fixed 00:00:31.886 92108e0a2 fsdev/aio: add support for null IOs 00:00:31.886 dcdab59d3 lib/reduce: Check return code of read superblock 00:00:31.886 95d9d27f7 bdev/nvme: controller failover/multipath doc change 00:00:31.886 f366dac4a bdev/nvme: removed 'multipath' param from spdk_bdev_nvme_create() 00:00:31.904 [Pipeline] writeFile 00:00:31.918 [Pipeline] sh 00:00:32.200 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:00:32.211 [Pipeline] sh 00:00:32.490 + cat autorun-spdk.conf 00:00:32.490 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:32.490 SPDK_TEST_NVME=1 00:00:32.490 SPDK_TEST_FTL=1 00:00:32.490 SPDK_TEST_ISAL=1 00:00:32.490 SPDK_RUN_ASAN=1 00:00:32.490 SPDK_RUN_UBSAN=1 00:00:32.490 SPDK_TEST_XNVME=1 00:00:32.490 SPDK_TEST_NVME_FDP=1 00:00:32.490 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:32.497 RUN_NIGHTLY=0 00:00:32.498 [Pipeline] } 00:00:32.511 [Pipeline] // stage 00:00:32.525 [Pipeline] stage 00:00:32.526 [Pipeline] { (Run VM) 00:00:32.538 [Pipeline] sh 00:00:32.817 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:00:32.817 + echo 'Start stage prepare_nvme.sh' 00:00:32.817 Start stage prepare_nvme.sh 00:00:32.817 + [[ -n 5 ]] 00:00:32.817 + disk_prefix=ex5 00:00:32.817 + [[ -n /var/jenkins/workspace/nvme-vg-autotest ]] 00:00:32.817 + [[ -e /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf ]] 00:00:32.817 + source /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf 00:00:32.817 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:32.817 ++ SPDK_TEST_NVME=1 00:00:32.817 ++ SPDK_TEST_FTL=1 00:00:32.817 ++ SPDK_TEST_ISAL=1 00:00:32.817 ++ 
SPDK_RUN_ASAN=1 00:00:32.817 ++ SPDK_RUN_UBSAN=1 00:00:32.817 ++ SPDK_TEST_XNVME=1 00:00:32.817 ++ SPDK_TEST_NVME_FDP=1 00:00:32.817 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:32.817 ++ RUN_NIGHTLY=0 00:00:32.817 + cd /var/jenkins/workspace/nvme-vg-autotest 00:00:32.817 + nvme_files=() 00:00:32.817 + declare -A nvme_files 00:00:32.817 + backend_dir=/var/lib/libvirt/images/backends 00:00:32.817 + nvme_files['nvme.img']=5G 00:00:32.817 + nvme_files['nvme-cmb.img']=5G 00:00:32.817 + nvme_files['nvme-multi0.img']=4G 00:00:32.817 + nvme_files['nvme-multi1.img']=4G 00:00:32.817 + nvme_files['nvme-multi2.img']=4G 00:00:32.817 + nvme_files['nvme-openstack.img']=8G 00:00:32.817 + nvme_files['nvme-zns.img']=5G 00:00:32.817 + (( SPDK_TEST_NVME_PMR == 1 )) 00:00:32.817 + (( SPDK_TEST_FTL == 1 )) 00:00:32.817 + nvme_files["nvme-ftl.img"]=6G 00:00:32.817 + (( SPDK_TEST_NVME_FDP == 1 )) 00:00:32.817 + nvme_files["nvme-fdp.img"]=1G 00:00:32.817 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:00:32.817 + for nvme in "${!nvme_files[@]}" 00:00:32.817 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi2.img -s 4G 00:00:32.817 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:00:32.817 + for nvme in "${!nvme_files[@]}" 00:00:32.817 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-ftl.img -s 6G 00:00:33.076 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc 00:00:33.076 + for nvme in "${!nvme_files[@]}" 00:00:33.076 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-cmb.img -s 5G 00:00:33.076 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:00:33.076 + for nvme in "${!nvme_files[@]}" 00:00:33.076 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-openstack.img -s 8G 00:00:33.076 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:00:33.076 + for nvme in "${!nvme_files[@]}" 00:00:33.076 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-zns.img -s 5G 00:00:33.334 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:00:33.334 + for nvme in "${!nvme_files[@]}" 00:00:33.334 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi1.img -s 4G 00:00:33.334 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:00:33.334 + for nvme in "${!nvme_files[@]}" 00:00:33.334 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi0.img -s 4G 00:00:33.334 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:00:33.334 + for nvme in "${!nvme_files[@]}" 00:00:33.334 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-fdp.img -s 1G 00:00:33.593 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc 00:00:33.593 + for nvme in "${!nvme_files[@]}" 00:00:33.593 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme.img -s 5G 00:00:33.851 Formatting 
'/var/lib/libvirt/images/backends/ex5-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:00:33.851 ++ sudo grep -rl ex5-nvme.img /etc/libvirt/qemu 00:00:33.851 + echo 'End stage prepare_nvme.sh' 00:00:33.851 End stage prepare_nvme.sh 00:00:33.862 [Pipeline] sh 00:00:34.174 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:00:34.174 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex5-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex5-nvme.img -b /var/lib/libvirt/images/backends/ex5-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex5-nvme-multi1.img:/var/lib/libvirt/images/backends/ex5-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex5-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora39 00:00:34.174 00:00:34.174 DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant 00:00:34.174 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk 00:00:34.174 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest 00:00:34.174 HELP=0 00:00:34.174 DRY_RUN=0 00:00:34.174 NVME_FILE=/var/lib/libvirt/images/backends/ex5-nvme-ftl.img,/var/lib/libvirt/images/backends/ex5-nvme.img,/var/lib/libvirt/images/backends/ex5-nvme-multi0.img,/var/lib/libvirt/images/backends/ex5-nvme-fdp.img, 00:00:34.174 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme, 00:00:34.174 NVME_AUTO_CREATE=0 00:00:34.174 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex5-nvme-multi1.img:/var/lib/libvirt/images/backends/ex5-nvme-multi2.img,, 00:00:34.174 NVME_CMB=,,,, 00:00:34.174 NVME_PMR=,,,, 00:00:34.174 NVME_ZNS=,,,, 00:00:34.174 NVME_MS=true,,,, 00:00:34.174 NVME_FDP=,,,on, 00:00:34.174 SPDK_VAGRANT_DISTRO=fedora39 00:00:34.174 SPDK_VAGRANT_VMCPU=10 00:00:34.174 SPDK_VAGRANT_VMRAM=12288 00:00:34.174 SPDK_VAGRANT_PROVIDER=libvirt 00:00:34.174 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:00:34.174 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:00:34.174 SPDK_OPENSTACK_NETWORK=0 00:00:34.174 VAGRANT_PACKAGE_BOX=0 00:00:34.174 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:00:34.174 FORCE_DISTRO=true 00:00:34.174 VAGRANT_BOX_VERSION= 00:00:34.174 EXTRA_VAGRANTFILES= 00:00:34.174 NIC_MODEL=e1000 00:00:34.174 00:00:34.174 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt' 00:00:34.174 /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvme-vg-autotest 00:00:37.460 Bringing machine 'default' up with 'libvirt' provider... 00:00:37.460 ==> default: Creating image (snapshot of base box volume). 00:00:37.719 ==> default: Creating domain with the following settings... 
00:00:37.719 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1728459639_470ba5746524b86ad0b1 00:00:37.719 ==> default: -- Domain type: kvm 00:00:37.719 ==> default: -- Cpus: 10 00:00:37.719 ==> default: -- Feature: acpi 00:00:37.719 ==> default: -- Feature: apic 00:00:37.719 ==> default: -- Feature: pae 00:00:37.719 ==> default: -- Memory: 12288M 00:00:37.719 ==> default: -- Memory Backing: hugepages: 00:00:37.719 ==> default: -- Management MAC: 00:00:37.719 ==> default: -- Loader: 00:00:37.719 ==> default: -- Nvram: 00:00:37.719 ==> default: -- Base box: spdk/fedora39 00:00:37.719 ==> default: -- Storage pool: default 00:00:37.719 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1728459639_470ba5746524b86ad0b1.img (20G) 00:00:37.719 ==> default: -- Volume Cache: default 00:00:37.719 ==> default: -- Kernel: 00:00:37.719 ==> default: -- Initrd: 00:00:37.719 ==> default: -- Graphics Type: vnc 00:00:37.719 ==> default: -- Graphics Port: -1 00:00:37.719 ==> default: -- Graphics IP: 127.0.0.1 00:00:37.719 ==> default: -- Graphics Password: Not defined 00:00:37.719 ==> default: -- Video Type: cirrus 00:00:37.719 ==> default: -- Video VRAM: 9216 00:00:37.719 ==> default: -- Sound Type: 00:00:37.719 ==> default: -- Keymap: en-us 00:00:37.719 ==> default: -- TPM Path: 00:00:37.719 ==> default: -- INPUT: type=mouse, bus=ps2 00:00:37.719 ==> default: -- Command line args: 00:00:37.719 ==> default: -> value=-device, 00:00:37.719 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:00:37.720 ==> default: -> value=-drive, 00:00:37.720 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-ftl.img,if=none,id=nvme-0-drive0, 00:00:37.720 ==> default: -> value=-device, 00:00:37.720 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64, 00:00:37.720 ==> default: -> value=-device, 00:00:37.720 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:00:37.720 ==> default: -> value=-drive, 00:00:37.720 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme.img,if=none,id=nvme-1-drive0, 00:00:37.720 ==> default: -> value=-device, 00:00:37.720 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:37.720 ==> default: -> value=-device, 00:00:37.720 ==> default: -> value=nvme,id=nvme-2,serial=12342,addr=0x12, 00:00:37.720 ==> default: -> value=-drive, 00:00:37.720 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi0.img,if=none,id=nvme-2-drive0, 00:00:37.720 ==> default: -> value=-device, 00:00:37.720 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:37.720 ==> default: -> value=-drive, 00:00:37.720 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi1.img,if=none,id=nvme-2-drive1, 00:00:37.720 ==> default: -> value=-device, 00:00:37.720 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:37.720 ==> default: -> value=-drive, 00:00:37.720 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi2.img,if=none,id=nvme-2-drive2, 00:00:37.720 ==> default: -> value=-device, 00:00:37.720 ==> default: -> 
value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:37.720 ==> default: -> value=-device, 00:00:37.720 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8, 00:00:37.720 ==> default: -> value=-device, 00:00:37.720 ==> default: -> value=nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3, 00:00:37.720 ==> default: -> value=-drive, 00:00:37.720 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-fdp.img,if=none,id=nvme-3-drive0, 00:00:37.720 ==> default: -> value=-device, 00:00:37.720 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:37.720 ==> default: Creating shared folders metadata... 00:00:37.720 ==> default: Starting domain. 00:00:39.097 ==> default: Waiting for domain to get an IP address... 00:00:57.184 ==> default: Waiting for SSH to become available... 00:00:57.184 ==> default: Configuring and enabling network interfaces... 00:01:00.470 default: SSH address: 192.168.121.123:22 00:01:00.470 default: SSH username: vagrant 00:01:00.470 default: SSH auth method: private key 00:01:02.372 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:01:10.518 ==> default: Mounting SSHFS shared folder... 00:01:11.482 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:01:11.482 ==> default: Checking Mount.. 00:01:12.857 ==> default: Folder Successfully Mounted! 00:01:12.857 ==> default: Running provisioner: file... 00:01:13.792 default: ~/.gitconfig => .gitconfig 00:01:14.050 00:01:14.050 SUCCESS! 00:01:14.050 00:01:14.050 cd to /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:01:14.050 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:01:14.050 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 00:01:14.050 00:01:14.060 [Pipeline] } 00:01:14.085 [Pipeline] // stage 00:01:14.093 [Pipeline] dir 00:01:14.094 Running in /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt 00:01:14.095 [Pipeline] { 00:01:14.105 [Pipeline] catchError 00:01:14.106 [Pipeline] { 00:01:14.114 [Pipeline] sh 00:01:14.388 + vagrant ssh-config --host vagrant 00:01:14.388 + sed -ne /^Host/,$p 00:01:14.388 + tee ssh_conf 00:01:17.672 Host vagrant 00:01:17.672 HostName 192.168.121.123 00:01:17.672 User vagrant 00:01:17.672 Port 22 00:01:17.672 UserKnownHostsFile /dev/null 00:01:17.672 StrictHostKeyChecking no 00:01:17.672 PasswordAuthentication no 00:01:17.672 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:01:17.672 IdentitiesOnly yes 00:01:17.672 LogLevel FATAL 00:01:17.672 ForwardAgent yes 00:01:17.672 ForwardX11 yes 00:01:17.672 00:01:17.690 [Pipeline] withEnv 00:01:17.696 [Pipeline] { 00:01:17.711 [Pipeline] sh 00:01:17.989 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:01:17.989 source /etc/os-release 00:01:17.989 [[ -e /image.version ]] && img=$(< /image.version) 00:01:17.989 # Minimal, systemd-like check. 
00:01:17.989 if [[ -e /.dockerenv ]]; then 00:01:17.989 # Clear garbage from the node's name: 00:01:17.989 # agt-er_autotest_547-896 -> autotest_547-896 00:01:17.989 # $HOSTNAME is the actual container id 00:01:17.989 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:01:17.989 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:01:17.989 # We can assume this is a mount from a host where container is running, 00:01:17.989 # so fetch its hostname to easily identify the target swarm worker. 00:01:17.989 container="$(< /etc/hostname) ($agent)" 00:01:17.989 else 00:01:17.989 # Fallback 00:01:17.989 container=$agent 00:01:17.989 fi 00:01:17.989 fi 00:01:17.989 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:01:17.989 00:01:18.000 [Pipeline] } 00:01:18.013 [Pipeline] // withEnv 00:01:18.019 [Pipeline] setCustomBuildProperty 00:01:18.030 [Pipeline] stage 00:01:18.032 [Pipeline] { (Tests) 00:01:18.046 [Pipeline] sh 00:01:18.326 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:01:18.597 [Pipeline] sh 00:01:18.932 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:01:18.947 [Pipeline] timeout 00:01:18.948 Timeout set to expire in 50 min 00:01:18.949 [Pipeline] { 00:01:18.960 [Pipeline] sh 00:01:19.241 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:01:19.810 HEAD is now at 1c2942c86 module/vfu_device/vfu_virtio_rpc: log fixed 00:01:19.821 [Pipeline] sh 00:01:20.099 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:01:20.370 [Pipeline] sh 00:01:20.649 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:01:20.923 [Pipeline] sh 00:01:21.201 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo 00:01:21.460 ++ readlink -f spdk_repo 00:01:21.460 + DIR_ROOT=/home/vagrant/spdk_repo 00:01:21.460 + [[ -n /home/vagrant/spdk_repo ]] 00:01:21.460 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:01:21.460 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:01:21.460 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:01:21.460 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:01:21.460 + [[ -d /home/vagrant/spdk_repo/output ]] 00:01:21.460 + [[ nvme-vg-autotest == pkgdep-* ]] 00:01:21.460 + cd /home/vagrant/spdk_repo 00:01:21.460 + source /etc/os-release 00:01:21.460 ++ NAME='Fedora Linux' 00:01:21.460 ++ VERSION='39 (Cloud Edition)' 00:01:21.460 ++ ID=fedora 00:01:21.460 ++ VERSION_ID=39 00:01:21.460 ++ VERSION_CODENAME= 00:01:21.460 ++ PLATFORM_ID=platform:f39 00:01:21.460 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:01:21.460 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:21.460 ++ LOGO=fedora-logo-icon 00:01:21.460 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:01:21.460 ++ HOME_URL=https://fedoraproject.org/ 00:01:21.460 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:01:21.460 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:21.460 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:21.460 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:21.460 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:01:21.460 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:21.460 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:01:21.460 ++ SUPPORT_END=2024-11-12 00:01:21.460 ++ VARIANT='Cloud Edition' 00:01:21.460 ++ VARIANT_ID=cloud 00:01:21.460 + uname -a 00:01:21.460 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:01:21.460 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:01:21.718 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:01:21.976 Hugepages 00:01:21.976 node hugesize free / total 00:01:21.976 node0 1048576kB 0 / 0 00:01:21.976 node0 2048kB 0 / 0 00:01:21.976 00:01:21.976 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:21.976 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:01:22.235 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:01:22.235 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme2 nvme2n1 00:01:22.235 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme3 nvme3n1 nvme3n2 nvme3n3 00:01:22.235 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:01:22.235 + rm -f /tmp/spdk-ld-path 00:01:22.235 + source autorun-spdk.conf 00:01:22.235 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:22.235 ++ SPDK_TEST_NVME=1 00:01:22.235 ++ SPDK_TEST_FTL=1 00:01:22.235 ++ SPDK_TEST_ISAL=1 00:01:22.235 ++ SPDK_RUN_ASAN=1 00:01:22.235 ++ SPDK_RUN_UBSAN=1 00:01:22.235 ++ SPDK_TEST_XNVME=1 00:01:22.235 ++ SPDK_TEST_NVME_FDP=1 00:01:22.235 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:22.235 ++ RUN_NIGHTLY=0 00:01:22.235 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:22.235 + [[ -n '' ]] 00:01:22.235 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:01:22.235 + for M in /var/spdk/build-*-manifest.txt 00:01:22.235 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:01:22.235 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:22.235 + for M in /var/spdk/build-*-manifest.txt 00:01:22.235 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:22.235 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:22.235 + for M in /var/spdk/build-*-manifest.txt 00:01:22.235 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:22.235 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:22.235 ++ uname 00:01:22.235 + [[ Linux == \L\i\n\u\x ]] 00:01:22.235 + sudo dmesg -T 00:01:22.235 + sudo dmesg --clear 00:01:22.235 + dmesg_pid=5299 00:01:22.235 
+ sudo dmesg -Tw 00:01:22.235 + [[ Fedora Linux == FreeBSD ]] 00:01:22.235 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:22.235 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:22.235 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:22.235 + [[ -x /usr/src/fio-static/fio ]] 00:01:22.235 + export FIO_BIN=/usr/src/fio-static/fio 00:01:22.235 + FIO_BIN=/usr/src/fio-static/fio 00:01:22.235 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:22.235 + [[ ! -v VFIO_QEMU_BIN ]] 00:01:22.235 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:22.235 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:22.235 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:22.235 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:22.235 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:22.235 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:22.235 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:22.235 Test configuration: 00:01:22.236 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:22.236 SPDK_TEST_NVME=1 00:01:22.236 SPDK_TEST_FTL=1 00:01:22.236 SPDK_TEST_ISAL=1 00:01:22.236 SPDK_RUN_ASAN=1 00:01:22.236 SPDK_RUN_UBSAN=1 00:01:22.236 SPDK_TEST_XNVME=1 00:01:22.236 SPDK_TEST_NVME_FDP=1 00:01:22.236 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:22.494 RUN_NIGHTLY=0 07:41:24 -- common/autotest_common.sh@1680 -- $ [[ n == y ]] 00:01:22.494 07:41:24 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:01:22.494 07:41:24 -- scripts/common.sh@15 -- $ shopt -s extglob 00:01:22.494 07:41:24 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:22.494 07:41:24 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:22.494 07:41:24 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:22.494 07:41:24 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:22.494 07:41:24 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:22.495 07:41:24 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:22.495 07:41:24 -- paths/export.sh@5 -- $ export PATH 00:01:22.495 07:41:24 -- paths/export.sh@6 -- $ echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:22.495 07:41:24 -- common/autobuild_common.sh@485 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:01:22.495 07:41:24 -- common/autobuild_common.sh@486 -- $ date +%s 00:01:22.495 07:41:24 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1728459684.XXXXXX 00:01:22.495 07:41:24 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1728459684.kn4HW6 00:01:22.495 07:41:24 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]] 00:01:22.495 07:41:24 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']' 00:01:22.495 07:41:24 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:01:22.495 07:41:24 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:01:22.495 07:41:24 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:01:22.495 07:41:24 -- common/autobuild_common.sh@502 -- $ get_config_params 00:01:22.495 07:41:24 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:01:22.495 07:41:24 -- common/autotest_common.sh@10 -- $ set +x 00:01:22.495 07:41:24 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme' 00:01:22.495 07:41:24 -- common/autobuild_common.sh@504 -- $ start_monitor_resources 00:01:22.495 07:41:24 -- pm/common@17 -- $ local monitor 00:01:22.495 07:41:24 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:22.495 07:41:24 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:22.495 07:41:24 -- pm/common@25 -- $ sleep 1 00:01:22.495 07:41:24 -- pm/common@21 -- $ date +%s 00:01:22.495 07:41:24 -- pm/common@21 -- $ date +%s 00:01:22.495 07:41:24 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1728459684 00:01:22.495 07:41:24 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1728459684 00:01:22.495 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1728459684_collect-cpu-load.pm.log 00:01:22.495 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1728459684_collect-vmstat.pm.log 00:01:23.431 07:41:25 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT 00:01:23.431 07:41:25 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:23.431 07:41:25 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:23.431 07:41:25 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:01:23.431 07:41:25 -- spdk/autobuild.sh@16 -- $ date -u 00:01:23.431 Wed Oct 9 07:41:25 AM UTC 2024 00:01:23.431 07:41:25 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:23.431 v25.01-pre-42-g1c2942c86 
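The collect-cpu-load / collect-vmstat invocations and the "trap stop_monitor_resources EXIT" line above follow a standard shell pattern: start background samplers before the build, then guarantee they are stopped on every exit path. A minimal sketch of that pattern, assuming hypothetical sampler paths (only the -d/-l/-p flags and the epoch-stamped "monitor.autobuild.sh.<date +%s>" prefix are taken from the log; the pid bookkeeping here is illustrative, not SPDK's actual pm/common implementation):

  #!/usr/bin/env bash
  out=/tmp/power; mkdir -p "$out"
  pids=()

  stop_monitor_resources() {
      # Runs on every exit path via the EXIT trap installed below.
      kill "${pids[@]}" 2>/dev/null
      wait "${pids[@]}" 2>/dev/null
  }

  stamp=$(date +%s)   # same epoch-suffix scheme as the .1728459684 logs above
  ./collect-cpu-load -d "$out" -l -p "monitor.autobuild.sh.$stamp" & pids+=($!)
  ./collect-vmstat   -d "$out" -l -p "monitor.autobuild.sh.$stamp" & pids+=($!)
  trap stop_monitor_resources EXIT

  make -j10           # the monitored workload; samplers die with the script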
00:01:23.431 07:41:25 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:01:23.431 07:41:25 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:01:23.431 07:41:25 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:01:23.431 07:41:25 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:01:23.431 07:41:25 -- common/autotest_common.sh@10 -- $ set +x 00:01:23.431 ************************************ 00:01:23.431 START TEST asan 00:01:23.431 ************************************ 00:01:23.431 using asan 00:01:23.431 07:41:25 asan -- common/autotest_common.sh@1125 -- $ echo 'using asan' 00:01:23.431 00:01:23.431 real 0m0.000s 00:01:23.431 user 0m0.000s 00:01:23.431 sys 0m0.000s 00:01:23.431 07:41:25 asan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:01:23.431 ************************************ 00:01:23.431 END TEST asan 00:01:23.431 ************************************ 00:01:23.431 07:41:25 asan -- common/autotest_common.sh@10 -- $ set +x 00:01:23.431 07:41:25 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:23.431 07:41:25 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:23.431 07:41:25 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:01:23.431 07:41:25 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:01:23.431 07:41:25 -- common/autotest_common.sh@10 -- $ set +x 00:01:23.431 ************************************ 00:01:23.431 START TEST ubsan 00:01:23.431 ************************************ 00:01:23.431 using ubsan 00:01:23.431 07:41:25 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan' 00:01:23.431 00:01:23.431 real 0m0.000s 00:01:23.431 user 0m0.000s 00:01:23.431 sys 0m0.000s 00:01:23.431 07:41:25 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:01:23.431 ************************************ 00:01:23.431 END TEST ubsan 00:01:23.431 ************************************ 00:01:23.431 07:41:25 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:23.690 07:41:25 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:23.690 07:41:25 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:23.690 07:41:25 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:23.690 07:41:25 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:23.690 07:41:25 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:23.690 07:41:25 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:23.690 07:41:25 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:23.690 07:41:25 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:23.690 07:41:25 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme --with-shared 00:01:23.690 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:01:23.690 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:01:24.258 Using 'verbs' RDMA provider 00:01:37.420 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:01:52.297 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:01:52.297 Creating mk/config.mk...done. 00:01:52.297 Creating mk/cc.flags.mk...done. 00:01:52.297 Type 'make' to build. 
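One aside before the make output: the "-> value=" pairs printed during domain creation further up in this log pair off into QEMU -device/-drive arguments. Reconstructed as a plain command line (a sketch using only the NVMe arguments shown in the log; controllers nvme-1 and nvme-2 follow the same shape, and the machine, memory, and display arguments vagrant-libvirt adds are omitted):

  qemu-system-x86_64 \
    -device nvme,id=nvme-0,serial=12340,addr=0x10 \
    -drive format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-ftl.img,if=none,id=nvme-0-drive0 \
    -device nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64 \
    -device nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8 \
    -device nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3 \
    -drive format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-fdp.img,if=none,id=nvme-3-drive0 \
    -device nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096

ms=64 gives the FTL namespace a 64-byte per-block metadata area, and attaching nvme-3 to the nvme-subsys device with fdp=on is what provides the Flexible Data Placement layout exercised by SPDK_TEST_NVME_FDP=1 in the conf above.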
00:01:52.297 07:41:52 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:01:52.297 07:41:52 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:01:52.297 07:41:52 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:01:52.297 07:41:52 -- common/autotest_common.sh@10 -- $ set +x 00:01:52.297 ************************************ 00:01:52.297 START TEST make 00:01:52.297 ************************************ 00:01:52.297 07:41:52 make -- common/autotest_common.sh@1125 -- $ make -j10 00:01:52.297 (cd /home/vagrant/spdk_repo/spdk/xnvme && \ 00:01:52.297 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \ 00:01:52.297 meson setup builddir \ 00:01:52.297 -Dwith-libaio=enabled \ 00:01:52.297 -Dwith-liburing=enabled \ 00:01:52.297 -Dwith-libvfn=disabled \ 00:01:52.297 -Dwith-spdk=false && \ 00:01:52.297 meson compile -C builddir && \ 00:01:52.297 cd -) 00:01:52.297 make[1]: Nothing to be done for 'all'. 00:01:54.196 The Meson build system 00:01:54.196 Version: 1.5.0 00:01:54.196 Source dir: /home/vagrant/spdk_repo/spdk/xnvme 00:01:54.196 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:01:54.196 Build type: native build 00:01:54.196 Project name: xnvme 00:01:54.196 Project version: 0.7.3 00:01:54.196 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:01:54.196 C linker for the host machine: cc ld.bfd 2.40-14 00:01:54.196 Host machine cpu family: x86_64 00:01:54.196 Host machine cpu: x86_64 00:01:54.196 Message: host_machine.system: linux 00:01:54.196 Compiler for C supports arguments -Wno-missing-braces: YES 00:01:54.196 Compiler for C supports arguments -Wno-cast-function-type: YES 00:01:54.196 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:01:54.196 Run-time dependency threads found: YES 00:01:54.196 Has header "setupapi.h" : NO 00:01:54.196 Has header "linux/blkzoned.h" : YES 00:01:54.196 Has header "linux/blkzoned.h" : YES (cached) 00:01:54.196 Has header "libaio.h" : YES 00:01:54.196 Library aio found: YES 00:01:54.196 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:01:54.196 Run-time dependency liburing found: YES 2.2 00:01:54.196 Dependency libvfn skipped: feature with-libvfn disabled 00:01:54.196 Run-time dependency appleframeworks found: NO (tried framework) 00:01:54.196 Run-time dependency appleframeworks found: NO (tried framework) 00:01:54.196 Configuring xnvme_config.h using configuration 00:01:54.196 Configuring xnvme.spec using configuration 00:01:54.196 Run-time dependency bash-completion found: YES 2.11 00:01:54.196 Message: Bash-completions: /usr/share/bash-completion/completions 00:01:54.196 Program cp found: YES (/usr/bin/cp) 00:01:54.196 Has header "winsock2.h" : NO 00:01:54.196 Has header "dbghelp.h" : NO 00:01:54.196 Library rpcrt4 found: NO 00:01:54.196 Library rt found: YES 00:01:54.196 Checking for function "clock_gettime" with dependency -lrt: YES 00:01:54.196 Found CMake: /usr/bin/cmake (3.27.7) 00:01:54.196 Run-time dependency _spdk found: NO (tried pkgconfig and cmake) 00:01:54.196 Run-time dependency wpdk found: NO (tried pkgconfig and cmake) 00:01:54.196 Run-time dependency spdk-win found: NO (tried pkgconfig and cmake) 00:01:54.196 Build targets in project: 32 00:01:54.196 00:01:54.196 xnvme 0.7.3 00:01:54.196 00:01:54.196 User defined options 00:01:54.196 with-libaio : enabled 00:01:54.196 with-liburing: enabled 00:01:54.196 with-libvfn : disabled 00:01:54.196 with-spdk : false 00:01:54.196 00:01:54.196 Found 
ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:54.454 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir' 00:01:54.712 [1/203] Generating toolbox/xnvme-driver-script with a custom command 00:01:54.712 [2/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd.c.o 00:01:54.712 [3/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_async.c.o 00:01:54.712 [4/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_dev.c.o 00:01:54.712 [5/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_mem_posix.c.o 00:01:54.712 [6/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_sync_psync.c.o 00:01:54.712 [7/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_nil.c.o 00:01:54.712 [8/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_nvme.c.o 00:01:54.712 [9/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_admin_shim.c.o 00:01:54.712 [10/203] Compiling C object lib/libxnvme.so.p/xnvme_adm.c.o 00:01:54.712 [11/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_posix.c.o 00:01:54.712 [12/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_emu.c.o 00:01:54.712 [13/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux.c.o 00:01:54.970 [14/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_thrpool.c.o 00:01:54.970 [15/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos.c.o 00:01:54.970 [16/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_admin.c.o 00:01:54.970 [17/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_dev.c.o 00:01:54.970 [18/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_libaio.c.o 00:01:54.970 [19/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_sync.c.o 00:01:54.970 [20/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_hugepage.c.o 00:01:54.970 [21/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk.c.o 00:01:54.970 [22/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_nvme.c.o 00:01:54.970 [23/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_admin.c.o 00:01:54.970 [24/203] Compiling C object lib/libxnvme.so.p/xnvme_be_nosys.c.o 00:01:54.970 [25/203] Compiling C object lib/libxnvme.so.p/xnvme_be.c.o 00:01:54.970 [26/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_dev.c.o 00:01:54.970 [27/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk.c.o 00:01:54.970 [28/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_liburing.c.o 00:01:54.970 [29/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_block.c.o 00:01:54.970 [30/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_admin.c.o 00:01:54.970 [31/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_sync.c.o 00:01:54.970 [32/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_mem.c.o 00:01:54.970 [33/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_ucmd.c.o 00:01:54.970 [34/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_async.c.o 00:01:55.228 [35/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_dev.c.o 00:01:55.228 [36/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_dev.c.o 00:01:55.228 [37/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_sync.c.o 00:01:55.228 [38/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio.c.o 00:01:55.228 [39/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_admin.c.o 00:01:55.228 [40/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows.c.o 00:01:55.228 [41/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_async.c.o 00:01:55.228 
[42/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_sync.c.o 00:01:55.228 [43/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_dev.c.o 00:01:55.228 [44/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_mem.c.o 00:01:55.228 [45/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_iocp.c.o 00:01:55.228 [46/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_iocp_th.c.o 00:01:55.228 [47/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_block.c.o 00:01:55.228 [48/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_nvme.c.o 00:01:55.228 [49/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_ioring.c.o 00:01:55.228 [50/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_mem.c.o 00:01:55.228 [51/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_dev.c.o 00:01:55.228 [52/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_fs.c.o 00:01:55.228 [53/203] Compiling C object lib/libxnvme.so.p/xnvme_libconf_entries.c.o 00:01:55.228 [54/203] Compiling C object lib/libxnvme.so.p/xnvme_file.c.o 00:01:55.228 [55/203] Compiling C object lib/libxnvme.so.p/xnvme_geo.c.o 00:01:55.228 [56/203] Compiling C object lib/libxnvme.so.p/xnvme_req.c.o 00:01:55.228 [57/203] Compiling C object lib/libxnvme.so.p/xnvme_dev.c.o 00:01:55.228 [58/203] Compiling C object lib/libxnvme.so.p/xnvme_cmd.c.o 00:01:55.228 [59/203] Compiling C object lib/libxnvme.so.p/xnvme_ident.c.o 00:01:55.486 [60/203] Compiling C object lib/libxnvme.so.p/xnvme_libconf.c.o 00:01:55.486 [61/203] Compiling C object lib/libxnvme.so.p/xnvme_lba.c.o 00:01:55.486 [62/203] Compiling C object lib/libxnvme.so.p/xnvme_kvs.c.o 00:01:55.486 [63/203] Compiling C object lib/libxnvme.so.p/xnvme_nvm.c.o 00:01:55.486 [64/203] Compiling C object lib/libxnvme.so.p/xnvme_ver.c.o 00:01:55.486 [65/203] Compiling C object lib/libxnvme.so.p/xnvme_buf.c.o 00:01:55.486 [66/203] Compiling C object lib/libxnvme.so.p/xnvme_opts.c.o 00:01:55.486 [67/203] Compiling C object lib/libxnvme.so.p/xnvme_topology.c.o 00:01:55.486 [68/203] Compiling C object lib/libxnvme.so.p/xnvme_queue.c.o 00:01:55.486 [69/203] Compiling C object lib/libxnvme.a.p/xnvme_adm.c.o 00:01:55.486 [70/203] Compiling C object lib/libxnvme.so.p/xnvme_spec_pp.c.o 00:01:55.744 [71/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_nil.c.o 00:01:55.744 [72/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_admin_shim.c.o 00:01:55.744 [73/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd.c.o 00:01:55.744 [74/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_mem_posix.c.o 00:01:55.744 [75/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_emu.c.o 00:01:55.744 [76/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_posix.c.o 00:01:55.744 [77/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_sync_psync.c.o 00:01:55.744 [78/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_async.c.o 00:01:55.744 [79/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_nvme.c.o 00:01:55.744 [80/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_dev.c.o 00:01:55.744 [81/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_thrpool.c.o 00:01:55.744 [82/203] Compiling C object lib/libxnvme.so.p/xnvme_znd.c.o 00:01:55.744 [83/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux.c.o 00:01:55.744 [84/203] Compiling C object lib/libxnvme.so.p/xnvme_cli.c.o 00:01:55.744 [85/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos.c.o 00:01:55.744 [86/203] Compiling C object 
lib/libxnvme.a.p/xnvme_be_macos_admin.c.o 00:01:56.002 [87/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_libaio.c.o 00:01:56.002 [88/203] Compiling C object lib/libxnvme.a.p/xnvme_be.c.o 00:01:56.002 [89/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_ucmd.c.o 00:01:56.002 [90/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_hugepage.c.o 00:01:56.002 [91/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_dev.c.o 00:01:56.002 [92/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_sync.c.o 00:01:56.002 [93/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk.c.o 00:01:56.002 [94/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk.c.o 00:01:56.002 [95/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_dev.c.o 00:01:56.002 [96/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_nvme.c.o 00:01:56.002 [97/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_admin.c.o 00:01:56.002 [98/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_admin.c.o 00:01:56.002 [99/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_async.c.o 00:01:56.002 [100/203] Compiling C object lib/libxnvme.a.p/xnvme_be_nosys.c.o 00:01:56.002 [101/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_dev.c.o 00:01:56.002 [102/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_sync.c.o 00:01:56.002 [103/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_block.c.o 00:01:56.002 [104/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_liburing.c.o 00:01:56.002 [105/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_sync.c.o 00:01:56.002 [106/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_mem.c.o 00:01:56.002 [107/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_dev.c.o 00:01:56.002 [108/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio.c.o 00:01:56.002 [109/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_admin.c.o 00:01:56.002 [110/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_dev.c.o 00:01:56.002 [111/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_async.c.o 00:01:56.002 [112/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_mem.c.o 00:01:56.002 [113/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_iocp_th.c.o 00:01:56.002 [114/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_ioring.c.o 00:01:56.002 [115/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_sync.c.o 00:01:56.002 [116/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_iocp.c.o 00:01:56.002 [117/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_dev.c.o 00:01:56.260 [118/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_block.c.o 00:01:56.260 [119/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows.c.o 00:01:56.260 [120/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_fs.c.o 00:01:56.260 [121/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_nvme.c.o 00:01:56.260 [122/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_mem.c.o 00:01:56.260 [123/203] Compiling C object lib/libxnvme.a.p/xnvme_libconf_entries.c.o 00:01:56.260 [124/203] Compiling C object lib/libxnvme.a.p/xnvme_cmd.c.o 00:01:56.260 [125/203] Compiling C object lib/libxnvme.a.p/xnvme_file.c.o 00:01:56.260 [126/203] Compiling C object lib/libxnvme.a.p/xnvme_geo.c.o 00:01:56.260 [127/203] Compiling C object lib/libxnvme.a.p/xnvme_libconf.c.o 00:01:56.260 [128/203] Compiling C object lib/libxnvme.a.p/xnvme_req.c.o 00:01:56.260 [129/203] Compiling C object 
lib/libxnvme.a.p/xnvme_lba.c.o 00:01:56.260 [130/203] Compiling C object lib/libxnvme.a.p/xnvme_dev.c.o 00:01:56.260 [131/203] Compiling C object lib/libxnvme.a.p/xnvme_ident.c.o 00:01:56.260 [132/203] Compiling C object lib/libxnvme.a.p/xnvme_kvs.c.o 00:01:56.518 [133/203] Compiling C object lib/libxnvme.a.p/xnvme_buf.c.o 00:01:56.518 [134/203] Compiling C object lib/libxnvme.a.p/xnvme_opts.c.o 00:01:56.518 [135/203] Compiling C object lib/libxnvme.a.p/xnvme_topology.c.o 00:01:56.518 [136/203] Compiling C object lib/libxnvme.a.p/xnvme_nvm.c.o 00:01:56.518 [137/203] Compiling C object lib/libxnvme.a.p/xnvme_queue.c.o 00:01:56.518 [138/203] Compiling C object lib/libxnvme.a.p/xnvme_ver.c.o 00:01:56.518 [139/203] Compiling C object tests/xnvme_tests_buf.p/buf.c.o 00:01:56.518 [140/203] Compiling C object lib/libxnvme.a.p/xnvme_spec_pp.c.o 00:01:56.518 [141/203] Compiling C object tests/xnvme_tests_cli.p/cli.c.o 00:01:56.518 [142/203] Compiling C object lib/libxnvme.so.p/xnvme_spec.c.o 00:01:56.518 [143/203] Compiling C object tests/xnvme_tests_async_intf.p/async_intf.c.o 00:01:56.518 [144/203] Linking target lib/libxnvme.so 00:01:56.518 [145/203] Compiling C object tests/xnvme_tests_scc.p/scc.c.o 00:01:56.776 [146/203] Compiling C object tests/xnvme_tests_xnvme_cli.p/xnvme_cli.c.o 00:01:56.776 [147/203] Compiling C object tests/xnvme_tests_xnvme_file.p/xnvme_file.c.o 00:01:56.776 [148/203] Compiling C object lib/libxnvme.a.p/xnvme_znd.c.o 00:01:56.776 [149/203] Compiling C object tests/xnvme_tests_enum.p/enum.c.o 00:01:56.776 [150/203] Compiling C object tests/xnvme_tests_znd_append.p/znd_append.c.o 00:01:56.776 [151/203] Compiling C object tests/xnvme_tests_znd_explicit_open.p/znd_explicit_open.c.o 00:01:56.776 [152/203] Compiling C object tests/xnvme_tests_znd_state.p/znd_state.c.o 00:01:56.776 [153/203] Compiling C object tests/xnvme_tests_kvs.p/kvs.c.o 00:01:56.776 [154/203] Compiling C object tests/xnvme_tests_lblk.p/lblk.c.o 00:01:56.776 [155/203] Compiling C object tests/xnvme_tests_map.p/map.c.o 00:01:56.776 [156/203] Compiling C object lib/libxnvme.a.p/xnvme_cli.c.o 00:01:56.776 [157/203] Compiling C object tests/xnvme_tests_ioworker.p/ioworker.c.o 00:01:56.776 [158/203] Compiling C object examples/xnvme_dev.p/xnvme_dev.c.o 00:01:56.776 [159/203] Compiling C object examples/xnvme_enum.p/xnvme_enum.c.o 00:01:57.034 [160/203] Compiling C object tests/xnvme_tests_znd_zrwa.p/znd_zrwa.c.o 00:01:57.034 [161/203] Compiling C object tools/xdd.p/xdd.c.o 00:01:57.034 [162/203] Compiling C object examples/xnvme_hello.p/xnvme_hello.c.o 00:01:57.034 [163/203] Compiling C object examples/xnvme_single_sync.p/xnvme_single_sync.c.o 00:01:57.034 [164/203] Compiling C object examples/xnvme_single_async.p/xnvme_single_async.c.o 00:01:57.034 [165/203] Compiling C object tools/kvs.p/kvs.c.o 00:01:57.034 [166/203] Compiling C object examples/xnvme_io_async.p/xnvme_io_async.c.o 00:01:57.034 [167/203] Compiling C object tools/lblk.p/lblk.c.o 00:01:57.034 [168/203] Compiling C object examples/zoned_io_sync.p/zoned_io_sync.c.o 00:01:57.034 [169/203] Compiling C object tools/zoned.p/zoned.c.o 00:01:57.034 [170/203] Compiling C object tools/xnvme_file.p/xnvme_file.c.o 00:01:57.292 [171/203] Compiling C object examples/zoned_io_async.p/zoned_io_async.c.o 00:01:57.292 [172/203] Compiling C object tools/xnvme.p/xnvme.c.o 00:01:57.292 [173/203] Compiling C object lib/libxnvme.a.p/xnvme_spec.c.o 00:01:57.292 [174/203] Linking static target lib/libxnvme.a 00:01:57.292 [175/203] Linking target tests/xnvme_tests_buf 
00:01:57.292 [176/203] Linking target tests/xnvme_tests_ioworker 00:01:57.292 [177/203] Linking target tests/xnvme_tests_znd_state 00:01:57.292 [178/203] Linking target tests/xnvme_tests_async_intf 00:01:57.292 [179/203] Linking target tests/xnvme_tests_cli 00:01:57.292 [180/203] Linking target tests/xnvme_tests_enum 00:01:57.292 [181/203] Linking target tests/xnvme_tests_lblk 00:01:57.292 [182/203] Linking target tests/xnvme_tests_xnvme_file 00:01:57.292 [183/203] Linking target tests/xnvme_tests_znd_explicit_open 00:01:57.549 [184/203] Linking target tests/xnvme_tests_znd_append 00:01:57.549 [185/203] Linking target tests/xnvme_tests_scc 00:01:57.549 [186/203] Linking target tests/xnvme_tests_xnvme_cli 00:01:57.549 [187/203] Linking target tests/xnvme_tests_znd_zrwa 00:01:57.549 [188/203] Linking target tests/xnvme_tests_kvs 00:01:57.549 [189/203] Linking target tools/kvs 00:01:57.549 [190/203] Linking target tools/xnvme_file 00:01:57.549 [191/203] Linking target tests/xnvme_tests_map 00:01:57.549 [192/203] Linking target tools/xdd 00:01:57.549 [193/203] Linking target tools/lblk 00:01:57.549 [194/203] Linking target tools/zoned 00:01:57.549 [195/203] Linking target examples/xnvme_hello 00:01:57.549 [196/203] Linking target tools/xnvme 00:01:57.549 [197/203] Linking target examples/xnvme_enum 00:01:57.549 [198/203] Linking target examples/xnvme_dev 00:01:57.549 [199/203] Linking target examples/zoned_io_async 00:01:57.549 [200/203] Linking target examples/xnvme_io_async 00:01:57.549 [201/203] Linking target examples/zoned_io_sync 00:01:57.549 [202/203] Linking target examples/xnvme_single_async 00:01:57.549 [203/203] Linking target examples/xnvme_single_sync 00:01:57.549 INFO: autodetecting backend as ninja 00:01:57.549 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:01:57.549 /home/vagrant/spdk_repo/spdk/xnvmebuild 00:02:05.652 The Meson build system 00:02:05.652 Version: 1.5.0 00:02:05.652 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:02:05.652 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:02:05.652 Build type: native build 00:02:05.652 Program cat found: YES (/usr/bin/cat) 00:02:05.652 Project name: DPDK 00:02:05.652 Project version: 24.03.0 00:02:05.652 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:05.652 C linker for the host machine: cc ld.bfd 2.40-14 00:02:05.652 Host machine cpu family: x86_64 00:02:05.653 Host machine cpu: x86_64 00:02:05.653 Message: ## Building in Developer Mode ## 00:02:05.653 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:05.653 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:02:05.653 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:05.653 Program python3 found: YES (/usr/bin/python3) 00:02:05.653 Program cat found: YES (/usr/bin/cat) 00:02:05.653 Compiler for C supports arguments -march=native: YES 00:02:05.653 Checking for size of "void *" : 8 00:02:05.653 Checking for size of "void *" : 8 (cached) 00:02:05.653 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:02:05.653 Library m found: YES 00:02:05.653 Library numa found: YES 00:02:05.653 Has header "numaif.h" : YES 00:02:05.653 Library fdt found: NO 00:02:05.653 Library execinfo found: NO 00:02:05.653 Has header "execinfo.h" : YES 00:02:05.653 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:05.653 
Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:05.653 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:05.653 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:05.653 Run-time dependency openssl found: YES 3.1.1 00:02:05.653 Run-time dependency libpcap found: YES 1.10.4 00:02:05.653 Has header "pcap.h" with dependency libpcap: YES 00:02:05.653 Compiler for C supports arguments -Wcast-qual: YES 00:02:05.653 Compiler for C supports arguments -Wdeprecated: YES 00:02:05.653 Compiler for C supports arguments -Wformat: YES 00:02:05.653 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:05.653 Compiler for C supports arguments -Wformat-security: NO 00:02:05.653 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:05.653 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:05.653 Compiler for C supports arguments -Wnested-externs: YES 00:02:05.653 Compiler for C supports arguments -Wold-style-definition: YES 00:02:05.653 Compiler for C supports arguments -Wpointer-arith: YES 00:02:05.653 Compiler for C supports arguments -Wsign-compare: YES 00:02:05.653 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:05.653 Compiler for C supports arguments -Wundef: YES 00:02:05.653 Compiler for C supports arguments -Wwrite-strings: YES 00:02:05.653 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:05.653 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:05.653 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:05.653 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:05.653 Program objdump found: YES (/usr/bin/objdump) 00:02:05.653 Compiler for C supports arguments -mavx512f: YES 00:02:05.653 Checking if "AVX512 checking" compiles: YES 00:02:05.653 Fetching value of define "__SSE4_2__" : 1 00:02:05.653 Fetching value of define "__AES__" : 1 00:02:05.653 Fetching value of define "__AVX__" : 1 00:02:05.653 Fetching value of define "__AVX2__" : 1 00:02:05.653 Fetching value of define "__AVX512BW__" : (undefined) 00:02:05.653 Fetching value of define "__AVX512CD__" : (undefined) 00:02:05.653 Fetching value of define "__AVX512DQ__" : (undefined) 00:02:05.653 Fetching value of define "__AVX512F__" : (undefined) 00:02:05.653 Fetching value of define "__AVX512VL__" : (undefined) 00:02:05.653 Fetching value of define "__PCLMUL__" : 1 00:02:05.653 Fetching value of define "__RDRND__" : 1 00:02:05.653 Fetching value of define "__RDSEED__" : 1 00:02:05.653 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:05.653 Fetching value of define "__znver1__" : (undefined) 00:02:05.653 Fetching value of define "__znver2__" : (undefined) 00:02:05.653 Fetching value of define "__znver3__" : (undefined) 00:02:05.653 Fetching value of define "__znver4__" : (undefined) 00:02:05.653 Library asan found: YES 00:02:05.653 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:05.653 Message: lib/log: Defining dependency "log" 00:02:05.653 Message: lib/kvargs: Defining dependency "kvargs" 00:02:05.653 Message: lib/telemetry: Defining dependency "telemetry" 00:02:05.653 Library rt found: YES 00:02:05.653 Checking for function "getentropy" : NO 00:02:05.653 Message: lib/eal: Defining dependency "eal" 00:02:05.653 Message: lib/ring: Defining dependency "ring" 00:02:05.653 Message: lib/rcu: Defining dependency "rcu" 00:02:05.653 Message: lib/mempool: Defining dependency "mempool" 00:02:05.653 Message: lib/mbuf: Defining dependency "mbuf" 
00:02:05.653 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:05.653 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:05.653 Compiler for C supports arguments -mpclmul: YES 00:02:05.653 Compiler for C supports arguments -maes: YES 00:02:05.653 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:05.653 Compiler for C supports arguments -mavx512bw: YES 00:02:05.653 Compiler for C supports arguments -mavx512dq: YES 00:02:05.653 Compiler for C supports arguments -mavx512vl: YES 00:02:05.653 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:05.653 Compiler for C supports arguments -mavx2: YES 00:02:05.653 Compiler for C supports arguments -mavx: YES 00:02:05.653 Message: lib/net: Defining dependency "net" 00:02:05.653 Message: lib/meter: Defining dependency "meter" 00:02:05.653 Message: lib/ethdev: Defining dependency "ethdev" 00:02:05.653 Message: lib/pci: Defining dependency "pci" 00:02:05.653 Message: lib/cmdline: Defining dependency "cmdline" 00:02:05.653 Message: lib/hash: Defining dependency "hash" 00:02:05.653 Message: lib/timer: Defining dependency "timer" 00:02:05.653 Message: lib/compressdev: Defining dependency "compressdev" 00:02:05.653 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:05.653 Message: lib/dmadev: Defining dependency "dmadev" 00:02:05.653 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:05.653 Message: lib/power: Defining dependency "power" 00:02:05.653 Message: lib/reorder: Defining dependency "reorder" 00:02:05.653 Message: lib/security: Defining dependency "security" 00:02:05.653 Has header "linux/userfaultfd.h" : YES 00:02:05.653 Has header "linux/vduse.h" : YES 00:02:05.653 Message: lib/vhost: Defining dependency "vhost" 00:02:05.653 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:05.653 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:05.653 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:05.653 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:05.653 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:05.653 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:05.653 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:05.653 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:05.653 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:05.653 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:05.653 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:05.653 Configuring doxy-api-html.conf using configuration 00:02:05.653 Configuring doxy-api-man.conf using configuration 00:02:05.653 Program mandb found: YES (/usr/bin/mandb) 00:02:05.653 Program sphinx-build found: NO 00:02:05.653 Configuring rte_build_config.h using configuration 00:02:05.653 Message: 00:02:05.653 ================= 00:02:05.653 Applications Enabled 00:02:05.653 ================= 00:02:05.653 00:02:05.653 apps: 00:02:05.653 00:02:05.653 00:02:05.653 Message: 00:02:05.653 ================= 00:02:05.653 Libraries Enabled 00:02:05.653 ================= 00:02:05.653 00:02:05.653 libs: 00:02:05.653 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:05.653 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:05.653 cryptodev, dmadev, power, reorder, security, vhost, 00:02:05.653 00:02:05.653 Message: 00:02:05.653 =============== 00:02:05.653 Drivers Enabled 00:02:05.653 
=============== 00:02:05.653 00:02:05.653 common: 00:02:05.653 00:02:05.653 bus: 00:02:05.653 pci, vdev, 00:02:05.653 mempool: 00:02:05.653 ring, 00:02:05.653 dma: 00:02:05.653 00:02:05.653 net: 00:02:05.653 00:02:05.653 crypto: 00:02:05.653 00:02:05.653 compress: 00:02:05.653 00:02:05.653 vdpa: 00:02:05.653 00:02:05.653 00:02:05.653 Message: 00:02:05.653 ================= 00:02:05.653 Content Skipped 00:02:05.653 ================= 00:02:05.653 00:02:05.653 apps: 00:02:05.653 dumpcap: explicitly disabled via build config 00:02:05.653 graph: explicitly disabled via build config 00:02:05.653 pdump: explicitly disabled via build config 00:02:05.653 proc-info: explicitly disabled via build config 00:02:05.653 test-acl: explicitly disabled via build config 00:02:05.653 test-bbdev: explicitly disabled via build config 00:02:05.653 test-cmdline: explicitly disabled via build config 00:02:05.653 test-compress-perf: explicitly disabled via build config 00:02:05.653 test-crypto-perf: explicitly disabled via build config 00:02:05.653 test-dma-perf: explicitly disabled via build config 00:02:05.653 test-eventdev: explicitly disabled via build config 00:02:05.653 test-fib: explicitly disabled via build config 00:02:05.653 test-flow-perf: explicitly disabled via build config 00:02:05.653 test-gpudev: explicitly disabled via build config 00:02:05.653 test-mldev: explicitly disabled via build config 00:02:05.653 test-pipeline: explicitly disabled via build config 00:02:05.653 test-pmd: explicitly disabled via build config 00:02:05.653 test-regex: explicitly disabled via build config 00:02:05.653 test-sad: explicitly disabled via build config 00:02:05.653 test-security-perf: explicitly disabled via build config 00:02:05.653 00:02:05.653 libs: 00:02:05.653 argparse: explicitly disabled via build config 00:02:05.653 metrics: explicitly disabled via build config 00:02:05.653 acl: explicitly disabled via build config 00:02:05.653 bbdev: explicitly disabled via build config 00:02:05.653 bitratestats: explicitly disabled via build config 00:02:05.653 bpf: explicitly disabled via build config 00:02:05.653 cfgfile: explicitly disabled via build config 00:02:05.653 distributor: explicitly disabled via build config 00:02:05.653 efd: explicitly disabled via build config 00:02:05.653 eventdev: explicitly disabled via build config 00:02:05.653 dispatcher: explicitly disabled via build config 00:02:05.653 gpudev: explicitly disabled via build config 00:02:05.653 gro: explicitly disabled via build config 00:02:05.653 gso: explicitly disabled via build config 00:02:05.653 ip_frag: explicitly disabled via build config 00:02:05.653 jobstats: explicitly disabled via build config 00:02:05.654 latencystats: explicitly disabled via build config 00:02:05.654 lpm: explicitly disabled via build config 00:02:05.654 member: explicitly disabled via build config 00:02:05.654 pcapng: explicitly disabled via build config 00:02:05.654 rawdev: explicitly disabled via build config 00:02:05.654 regexdev: explicitly disabled via build config 00:02:05.654 mldev: explicitly disabled via build config 00:02:05.654 rib: explicitly disabled via build config 00:02:05.654 sched: explicitly disabled via build config 00:02:05.654 stack: explicitly disabled via build config 00:02:05.654 ipsec: explicitly disabled via build config 00:02:05.654 pdcp: explicitly disabled via build config 00:02:05.654 fib: explicitly disabled via build config 00:02:05.654 port: explicitly disabled via build config 00:02:05.654 pdump: explicitly disabled via build config 
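[editorial note] The "explicitly disabled via build config" entries above all trace back to DPDK's disable_apps/disable_libs Meson options; their exact values appear under "User defined options" further down in this log. A hedged sketch of the corresponding setup call (lists abbreviated here; build directory taken from the log):

    meson setup /home/vagrant/spdk_repo/spdk/dpdk/build-tmp \
        -Dbuildtype=debug -Ddefault_library=shared \
        -Ddisable_apps=dumpcap,graph,pdump,proc-info,... \
        -Ddisable_libs=acl,argparse,bbdev,bitratestats,...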
00:02:05.654 table: explicitly disabled via build config 00:02:05.654 pipeline: explicitly disabled via build config 00:02:05.654 graph: explicitly disabled via build config 00:02:05.654 node: explicitly disabled via build config 00:02:05.654 00:02:05.654 drivers: 00:02:05.654 common/cpt: not in enabled drivers build config 00:02:05.654 common/dpaax: not in enabled drivers build config 00:02:05.654 common/iavf: not in enabled drivers build config 00:02:05.654 common/idpf: not in enabled drivers build config 00:02:05.654 common/ionic: not in enabled drivers build config 00:02:05.654 common/mvep: not in enabled drivers build config 00:02:05.654 common/octeontx: not in enabled drivers build config 00:02:05.654 bus/auxiliary: not in enabled drivers build config 00:02:05.654 bus/cdx: not in enabled drivers build config 00:02:05.654 bus/dpaa: not in enabled drivers build config 00:02:05.654 bus/fslmc: not in enabled drivers build config 00:02:05.654 bus/ifpga: not in enabled drivers build config 00:02:05.654 bus/platform: not in enabled drivers build config 00:02:05.654 bus/uacce: not in enabled drivers build config 00:02:05.654 bus/vmbus: not in enabled drivers build config 00:02:05.654 common/cnxk: not in enabled drivers build config 00:02:05.654 common/mlx5: not in enabled drivers build config 00:02:05.654 common/nfp: not in enabled drivers build config 00:02:05.654 common/nitrox: not in enabled drivers build config 00:02:05.654 common/qat: not in enabled drivers build config 00:02:05.654 common/sfc_efx: not in enabled drivers build config 00:02:05.654 mempool/bucket: not in enabled drivers build config 00:02:05.654 mempool/cnxk: not in enabled drivers build config 00:02:05.654 mempool/dpaa: not in enabled drivers build config 00:02:05.654 mempool/dpaa2: not in enabled drivers build config 00:02:05.654 mempool/octeontx: not in enabled drivers build config 00:02:05.654 mempool/stack: not in enabled drivers build config 00:02:05.654 dma/cnxk: not in enabled drivers build config 00:02:05.654 dma/dpaa: not in enabled drivers build config 00:02:05.654 dma/dpaa2: not in enabled drivers build config 00:02:05.654 dma/hisilicon: not in enabled drivers build config 00:02:05.654 dma/idxd: not in enabled drivers build config 00:02:05.654 dma/ioat: not in enabled drivers build config 00:02:05.654 dma/skeleton: not in enabled drivers build config 00:02:05.654 net/af_packet: not in enabled drivers build config 00:02:05.654 net/af_xdp: not in enabled drivers build config 00:02:05.654 net/ark: not in enabled drivers build config 00:02:05.654 net/atlantic: not in enabled drivers build config 00:02:05.654 net/avp: not in enabled drivers build config 00:02:05.654 net/axgbe: not in enabled drivers build config 00:02:05.654 net/bnx2x: not in enabled drivers build config 00:02:05.654 net/bnxt: not in enabled drivers build config 00:02:05.654 net/bonding: not in enabled drivers build config 00:02:05.654 net/cnxk: not in enabled drivers build config 00:02:05.654 net/cpfl: not in enabled drivers build config 00:02:05.654 net/cxgbe: not in enabled drivers build config 00:02:05.654 net/dpaa: not in enabled drivers build config 00:02:05.654 net/dpaa2: not in enabled drivers build config 00:02:05.654 net/e1000: not in enabled drivers build config 00:02:05.654 net/ena: not in enabled drivers build config 00:02:05.654 net/enetc: not in enabled drivers build config 00:02:05.654 net/enetfec: not in enabled drivers build config 00:02:05.654 net/enic: not in enabled drivers build config 00:02:05.654 net/failsafe: not in enabled 
drivers build config 00:02:05.654 net/fm10k: not in enabled drivers build config 00:02:05.654 net/gve: not in enabled drivers build config 00:02:05.654 net/hinic: not in enabled drivers build config 00:02:05.654 net/hns3: not in enabled drivers build config 00:02:05.654 net/i40e: not in enabled drivers build config 00:02:05.654 net/iavf: not in enabled drivers build config 00:02:05.654 net/ice: not in enabled drivers build config 00:02:05.654 net/idpf: not in enabled drivers build config 00:02:05.654 net/igc: not in enabled drivers build config 00:02:05.654 net/ionic: not in enabled drivers build config 00:02:05.654 net/ipn3ke: not in enabled drivers build config 00:02:05.654 net/ixgbe: not in enabled drivers build config 00:02:05.654 net/mana: not in enabled drivers build config 00:02:05.654 net/memif: not in enabled drivers build config 00:02:05.654 net/mlx4: not in enabled drivers build config 00:02:05.654 net/mlx5: not in enabled drivers build config 00:02:05.654 net/mvneta: not in enabled drivers build config 00:02:05.654 net/mvpp2: not in enabled drivers build config 00:02:05.654 net/netvsc: not in enabled drivers build config 00:02:05.654 net/nfb: not in enabled drivers build config 00:02:05.654 net/nfp: not in enabled drivers build config 00:02:05.654 net/ngbe: not in enabled drivers build config 00:02:05.654 net/null: not in enabled drivers build config 00:02:05.654 net/octeontx: not in enabled drivers build config 00:02:05.654 net/octeon_ep: not in enabled drivers build config 00:02:05.654 net/pcap: not in enabled drivers build config 00:02:05.654 net/pfe: not in enabled drivers build config 00:02:05.654 net/qede: not in enabled drivers build config 00:02:05.654 net/ring: not in enabled drivers build config 00:02:05.654 net/sfc: not in enabled drivers build config 00:02:05.654 net/softnic: not in enabled drivers build config 00:02:05.654 net/tap: not in enabled drivers build config 00:02:05.654 net/thunderx: not in enabled drivers build config 00:02:05.654 net/txgbe: not in enabled drivers build config 00:02:05.654 net/vdev_netvsc: not in enabled drivers build config 00:02:05.654 net/vhost: not in enabled drivers build config 00:02:05.654 net/virtio: not in enabled drivers build config 00:02:05.654 net/vmxnet3: not in enabled drivers build config 00:02:05.654 raw/*: missing internal dependency, "rawdev" 00:02:05.654 crypto/armv8: not in enabled drivers build config 00:02:05.654 crypto/bcmfs: not in enabled drivers build config 00:02:05.654 crypto/caam_jr: not in enabled drivers build config 00:02:05.654 crypto/ccp: not in enabled drivers build config 00:02:05.654 crypto/cnxk: not in enabled drivers build config 00:02:05.654 crypto/dpaa_sec: not in enabled drivers build config 00:02:05.654 crypto/dpaa2_sec: not in enabled drivers build config 00:02:05.654 crypto/ipsec_mb: not in enabled drivers build config 00:02:05.654 crypto/mlx5: not in enabled drivers build config 00:02:05.654 crypto/mvsam: not in enabled drivers build config 00:02:05.654 crypto/nitrox: not in enabled drivers build config 00:02:05.654 crypto/null: not in enabled drivers build config 00:02:05.654 crypto/octeontx: not in enabled drivers build config 00:02:05.654 crypto/openssl: not in enabled drivers build config 00:02:05.654 crypto/scheduler: not in enabled drivers build config 00:02:05.654 crypto/uadk: not in enabled drivers build config 00:02:05.654 crypto/virtio: not in enabled drivers build config 00:02:05.654 compress/isal: not in enabled drivers build config 00:02:05.654 compress/mlx5: not in enabled 
drivers build config 00:02:05.654 compress/nitrox: not in enabled drivers build config 00:02:05.654 compress/octeontx: not in enabled drivers build config 00:02:05.654 compress/zlib: not in enabled drivers build config 00:02:05.654 regex/*: missing internal dependency, "regexdev" 00:02:05.654 ml/*: missing internal dependency, "mldev" 00:02:05.654 vdpa/ifc: not in enabled drivers build config 00:02:05.654 vdpa/mlx5: not in enabled drivers build config 00:02:05.654 vdpa/nfp: not in enabled drivers build config 00:02:05.654 vdpa/sfc: not in enabled drivers build config 00:02:05.654 event/*: missing internal dependency, "eventdev" 00:02:05.654 baseband/*: missing internal dependency, "bbdev" 00:02:05.654 gpu/*: missing internal dependency, "gpudev" 00:02:05.654 00:02:05.654 00:02:05.912 Build targets in project: 85 00:02:05.912 00:02:05.912 DPDK 24.03.0 00:02:05.912 00:02:05.912 User defined options 00:02:05.912 buildtype : debug 00:02:05.912 default_library : shared 00:02:05.912 libdir : lib 00:02:05.912 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:05.912 b_sanitize : address 00:02:05.912 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:05.912 c_link_args : 00:02:05.912 cpu_instruction_set: native 00:02:05.912 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:02:05.912 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:02:05.912 enable_docs : false 00:02:05.912 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:02:05.912 enable_kmods : false 00:02:05.912 max_lcores : 128 00:02:05.912 tests : false 00:02:05.912 00:02:05.912 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:06.479 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:02:06.479 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:06.735 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:06.735 [3/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:06.736 [4/268] Linking static target lib/librte_kvargs.a 00:02:06.736 [5/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:06.736 [6/268] Linking static target lib/librte_log.a 00:02:07.301 [7/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:07.301 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:07.301 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:07.301 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:07.560 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:07.560 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:07.560 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:07.560 [14/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:07.821 [15/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:07.821 [16/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:07.821 [17/268] Linking static target lib/librte_telemetry.a 00:02:07.821 [18/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:07.821 [19/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:07.821 [20/268] Linking target lib/librte_log.so.24.1 00:02:08.079 [21/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:08.079 [22/268] Linking target lib/librte_kvargs.so.24.1 00:02:08.337 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:08.337 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:08.337 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:08.337 [26/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:08.337 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:08.595 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:08.595 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:08.595 [30/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.595 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:08.595 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:08.595 [33/268] Linking target lib/librte_telemetry.so.24.1 00:02:08.867 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:08.867 [35/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:09.177 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:09.177 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:09.177 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:09.435 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:09.435 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:09.435 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:09.436 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:09.436 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:09.436 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:09.694 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:09.694 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:09.952 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:09.952 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:09.952 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:10.209 [50/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:10.467 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:10.467 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:10.725 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:10.725 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:10.725 [55/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:10.725 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:10.725 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:10.725 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:10.983 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:10.983 [60/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:11.241 [61/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:11.241 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:11.241 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:11.498 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:11.498 [65/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:11.498 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:11.498 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:11.756 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:11.756 [69/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:11.756 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:12.014 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:12.014 [72/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:12.014 [73/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:12.014 [74/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:12.014 [75/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:12.014 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:12.271 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:12.271 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:12.271 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:12.530 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:12.530 [81/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:12.530 [82/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:12.788 [83/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:12.788 [84/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:12.788 [85/268] Linking static target lib/librte_ring.a 00:02:12.788 [86/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:13.047 [87/268] Linking static target lib/librte_eal.a 00:02:13.047 [88/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:13.047 [89/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:13.047 [90/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:13.304 [91/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:13.304 [92/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.562 [93/268] Linking static target lib/librte_mempool.a 00:02:13.562 [94/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:13.562 [95/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:13.562 [96/268] Compiling C object 
lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:13.562 [97/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:13.562 [98/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:13.562 [99/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:13.820 [100/268] Linking static target lib/librte_rcu.a 00:02:13.820 [101/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:13.820 [102/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:14.079 [103/268] Linking static target lib/librte_mbuf.a 00:02:14.079 [104/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:14.339 [105/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:14.339 [106/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:14.339 [107/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.339 [108/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:14.339 [109/268] Linking static target lib/librte_net.a 00:02:14.339 [110/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:14.339 [111/268] Linking static target lib/librte_meter.a 00:02:14.597 [112/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.855 [113/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.855 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:14.855 [115/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.855 [116/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:14.855 [117/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:15.113 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:15.113 [119/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.679 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:15.679 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:15.679 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:15.937 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:15.937 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:15.937 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:15.937 [126/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:16.196 [127/268] Linking static target lib/librte_pci.a 00:02:16.196 [128/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:16.196 [129/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:16.455 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:16.455 [131/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.455 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:16.455 [133/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:16.455 [134/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:16.713 [135/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:16.713 [136/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:16.713 [137/268] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:16.713 [138/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:16.713 [139/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:16.713 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:16.713 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:16.993 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:16.993 [143/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:16.993 [144/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:17.268 [145/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:17.268 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:17.268 [147/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:17.268 [148/268] Linking static target lib/librte_cmdline.a 00:02:17.526 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:17.526 [150/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:17.526 [151/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:17.786 [152/268] Linking static target lib/librte_timer.a 00:02:17.786 [153/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:17.786 [154/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:18.044 [155/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:18.044 [156/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:18.302 [157/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:18.302 [158/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.302 [159/268] Linking static target lib/librte_ethdev.a 00:02:18.302 [160/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:18.561 [161/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:18.561 [162/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:18.561 [163/268] Linking static target lib/librte_compressdev.a 00:02:18.819 [164/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:18.819 [165/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.819 [166/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:18.819 [167/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:19.077 [168/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:19.077 [169/268] Linking static target lib/librte_dmadev.a 00:02:19.077 [170/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:19.077 [171/268] Linking static target lib/librte_hash.a 00:02:19.077 [172/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:19.336 [173/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:19.594 [174/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.594 [175/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:19.594 [176/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:19.853 [177/268] 
Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:19.853 [178/268] Linking static target lib/librte_cryptodev.a 00:02:19.853 [179/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:19.853 [180/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.853 [181/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:20.111 [182/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:20.111 [183/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:20.369 [184/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.369 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:20.369 [186/268] Linking static target lib/librte_power.a 00:02:20.628 [187/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:20.628 [188/268] Linking static target lib/librte_reorder.a 00:02:20.887 [189/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:20.887 [190/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:20.887 [191/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:20.887 [192/268] Linking static target lib/librte_security.a 00:02:20.887 [193/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:21.146 [194/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.713 [195/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:21.713 [196/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.713 [197/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.713 [198/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:21.971 [199/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:22.229 [200/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:22.488 [201/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.488 [202/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:22.746 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:22.746 [204/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:22.746 [205/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:22.746 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:23.004 [207/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:23.004 [208/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:23.004 [209/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:23.262 [210/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:23.262 [211/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:23.262 [212/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:23.262 [213/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:23.262 [214/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:23.262 [215/268] Linking static target drivers/librte_bus_vdev.a 00:02:23.520 [216/268] Generating 
drivers/rte_bus_pci.pmd.c with a custom command 00:02:23.778 [217/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:23.778 [218/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:23.778 [219/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:23.778 [220/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:23.778 [221/268] Linking static target drivers/librte_bus_pci.a 00:02:23.778 [222/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.778 [223/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:24.039 [224/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:24.040 [225/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:24.040 [226/268] Linking static target drivers/librte_mempool_ring.a 00:02:24.040 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.628 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:24.887 [229/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.887 [230/268] Linking target lib/librte_eal.so.24.1 00:02:25.145 [231/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:25.145 [232/268] Linking target lib/librte_ring.so.24.1 00:02:25.145 [233/268] Linking target lib/librte_meter.so.24.1 00:02:25.145 [234/268] Linking target lib/librte_pci.so.24.1 00:02:25.145 [235/268] Linking target lib/librte_timer.so.24.1 00:02:25.145 [236/268] Linking target lib/librte_dmadev.so.24.1 00:02:25.145 [237/268] Linking target drivers/librte_bus_vdev.so.24.1 00:02:25.403 [238/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:25.403 [239/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:25.403 [240/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:25.403 [241/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:25.403 [242/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:25.403 [243/268] Linking target lib/librte_mempool.so.24.1 00:02:25.403 [244/268] Linking target lib/librte_rcu.so.24.1 00:02:25.403 [245/268] Linking target drivers/librte_bus_pci.so.24.1 00:02:25.661 [246/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:25.661 [247/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:25.661 [248/268] Linking target lib/librte_mbuf.so.24.1 00:02:25.661 [249/268] Linking target drivers/librte_mempool_ring.so.24.1 00:02:25.918 [250/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:25.918 [251/268] Linking target lib/librte_reorder.so.24.1 00:02:25.918 [252/268] Linking target lib/librte_compressdev.so.24.1 00:02:25.918 [253/268] Linking target lib/librte_net.so.24.1 00:02:25.918 [254/268] Linking target lib/librte_cryptodev.so.24.1 00:02:25.918 [255/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:25.918 [256/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:26.176 [257/268] 
Linking target lib/librte_hash.so.24.1 00:02:26.176 [258/268] Linking target lib/librte_cmdline.so.24.1 00:02:26.176 [259/268] Linking target lib/librte_security.so.24.1 00:02:26.176 [260/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:26.433 [261/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.433 [262/268] Linking target lib/librte_ethdev.so.24.1 00:02:26.690 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:26.690 [264/268] Linking target lib/librte_power.so.24.1 00:02:29.219 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:29.220 [266/268] Linking static target lib/librte_vhost.a 00:02:30.595 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.853 [268/268] Linking target lib/librte_vhost.so.24.1 00:02:30.853 INFO: autodetecting backend as ninja 00:02:30.853 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:02:52.785 CC lib/log/log.o 00:02:52.785 CC lib/log/log_flags.o 00:02:52.785 CC lib/log/log_deprecated.o 00:02:52.785 CC lib/ut_mock/mock.o 00:02:52.785 CC lib/ut/ut.o 00:02:52.785 LIB libspdk_ut.a 00:02:52.785 LIB libspdk_log.a 00:02:52.785 SO libspdk_ut.so.2.0 00:02:52.785 LIB libspdk_ut_mock.a 00:02:52.785 SO libspdk_log.so.7.0 00:02:52.785 SO libspdk_ut_mock.so.6.0 00:02:52.785 SYMLINK libspdk_ut.so 00:02:52.785 SYMLINK libspdk_log.so 00:02:52.785 SYMLINK libspdk_ut_mock.so 00:02:52.785 CXX lib/trace_parser/trace.o 00:02:52.785 CC lib/dma/dma.o 00:02:52.785 CC lib/ioat/ioat.o 00:02:52.785 CC lib/util/base64.o 00:02:52.785 CC lib/util/bit_array.o 00:02:52.785 CC lib/util/cpuset.o 00:02:52.785 CC lib/util/crc16.o 00:02:52.785 CC lib/util/crc32c.o 00:02:52.785 CC lib/util/crc32.o 00:02:52.785 CC lib/vfio_user/host/vfio_user_pci.o 00:02:52.785 CC lib/util/crc32_ieee.o 00:02:52.785 CC lib/util/crc64.o 00:02:52.785 LIB libspdk_dma.a 00:02:52.785 CC lib/util/dif.o 00:02:52.785 SO libspdk_dma.so.5.0 00:02:52.785 CC lib/util/fd.o 00:02:52.785 CC lib/util/fd_group.o 00:02:52.785 CC lib/util/file.o 00:02:52.785 LIB libspdk_ioat.a 00:02:52.785 CC lib/util/hexlify.o 00:02:52.785 SYMLINK libspdk_dma.so 00:02:52.785 CC lib/util/iov.o 00:02:52.785 SO libspdk_ioat.so.7.0 00:02:52.785 CC lib/vfio_user/host/vfio_user.o 00:02:52.785 CC lib/util/math.o 00:02:52.785 SYMLINK libspdk_ioat.so 00:02:52.785 CC lib/util/net.o 00:02:52.785 CC lib/util/pipe.o 00:02:52.785 CC lib/util/strerror_tls.o 00:02:52.785 CC lib/util/string.o 00:02:52.785 CC lib/util/uuid.o 00:02:52.785 CC lib/util/xor.o 00:02:52.785 CC lib/util/zipf.o 00:02:52.785 LIB libspdk_vfio_user.a 00:02:52.785 CC lib/util/md5.o 00:02:52.785 SO libspdk_vfio_user.so.5.0 00:02:52.785 SYMLINK libspdk_vfio_user.so 00:02:53.350 LIB libspdk_util.a 00:02:53.350 SO libspdk_util.so.10.0 00:02:53.607 LIB libspdk_trace_parser.a 00:02:53.607 SYMLINK libspdk_util.so 00:02:53.607 SO libspdk_trace_parser.so.6.0 00:02:53.865 SYMLINK libspdk_trace_parser.so 00:02:53.865 CC lib/json/json_parse.o 00:02:53.865 CC lib/json/json_write.o 00:02:53.865 CC lib/json/json_util.o 00:02:53.865 CC lib/rdma_utils/rdma_utils.o 00:02:53.865 CC lib/idxd/idxd.o 00:02:53.865 CC lib/idxd/idxd_user.o 00:02:53.865 CC lib/vmd/vmd.o 00:02:53.865 CC lib/conf/conf.o 00:02:53.865 CC lib/rdma_provider/common.o 00:02:53.865 CC lib/env_dpdk/env.o 00:02:54.122 CC lib/rdma_provider/rdma_provider_verbs.o 
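[editorial note] The CC/LIB/SO/SYMLINK prefixes that begin above are SPDK's quiet build output: each line names a step and its target rather than echoing the full command line. Expanded, a single "CC lib/log/log.o" entry corresponds to roughly the following (compiler flags, include paths, and the archive path are illustrative assumptions; the log does not print the real commands):

    cc $CFLAGS -Iinclude -c lib/log/log.c -o lib/log/log.o   # the CC step
    ar rcs build/lib/libspdk_log.a lib/log/log.o             # the LIB step (static archive)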
00:02:54.122 CC lib/idxd/idxd_kernel.o 00:02:54.122 LIB libspdk_conf.a 00:02:54.122 SO libspdk_conf.so.6.0 00:02:54.122 CC lib/vmd/led.o 00:02:54.122 SYMLINK libspdk_conf.so 00:02:54.122 CC lib/env_dpdk/memory.o 00:02:54.122 CC lib/env_dpdk/pci.o 00:02:54.122 LIB libspdk_json.a 00:02:54.378 LIB libspdk_rdma_utils.a 00:02:54.378 SO libspdk_json.so.6.0 00:02:54.378 SO libspdk_rdma_utils.so.1.0 00:02:54.378 SYMLINK libspdk_json.so 00:02:54.378 SYMLINK libspdk_rdma_utils.so 00:02:54.378 CC lib/env_dpdk/init.o 00:02:54.378 CC lib/env_dpdk/threads.o 00:02:54.378 LIB libspdk_rdma_provider.a 00:02:54.378 CC lib/env_dpdk/pci_ioat.o 00:02:54.378 SO libspdk_rdma_provider.so.6.0 00:02:54.639 CC lib/env_dpdk/pci_virtio.o 00:02:54.639 CC lib/jsonrpc/jsonrpc_server.o 00:02:54.639 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:54.639 SYMLINK libspdk_rdma_provider.so 00:02:54.639 CC lib/jsonrpc/jsonrpc_client.o 00:02:54.898 LIB libspdk_idxd.a 00:02:54.898 CC lib/env_dpdk/pci_vmd.o 00:02:54.898 SO libspdk_idxd.so.12.1 00:02:54.898 LIB libspdk_vmd.a 00:02:54.898 CC lib/env_dpdk/pci_idxd.o 00:02:54.898 SO libspdk_vmd.so.6.0 00:02:55.156 SYMLINK libspdk_idxd.so 00:02:55.156 CC lib/env_dpdk/pci_event.o 00:02:55.156 CC lib/env_dpdk/sigbus_handler.o 00:02:55.156 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:55.156 SYMLINK libspdk_vmd.so 00:02:55.156 CC lib/env_dpdk/pci_dpdk.o 00:02:55.156 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:55.156 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:55.413 LIB libspdk_jsonrpc.a 00:02:55.413 SO libspdk_jsonrpc.so.6.0 00:02:55.413 SYMLINK libspdk_jsonrpc.so 00:02:55.671 CC lib/rpc/rpc.o 00:02:55.929 LIB libspdk_rpc.a 00:02:55.929 SO libspdk_rpc.so.6.0 00:02:56.186 SYMLINK libspdk_rpc.so 00:02:56.186 CC lib/notify/notify.o 00:02:56.186 CC lib/notify/notify_rpc.o 00:02:56.186 LIB libspdk_env_dpdk.a 00:02:56.186 CC lib/keyring/keyring.o 00:02:56.186 CC lib/keyring/keyring_rpc.o 00:02:56.186 CC lib/trace/trace.o 00:02:56.186 CC lib/trace/trace_flags.o 00:02:56.443 CC lib/trace/trace_rpc.o 00:02:56.443 SO libspdk_env_dpdk.so.15.0 00:02:56.443 LIB libspdk_notify.a 00:02:56.443 SO libspdk_notify.so.6.0 00:02:56.443 SYMLINK libspdk_env_dpdk.so 00:02:56.443 SYMLINK libspdk_notify.so 00:02:56.701 LIB libspdk_keyring.a 00:02:56.701 LIB libspdk_trace.a 00:02:56.701 SO libspdk_keyring.so.2.0 00:02:56.701 SO libspdk_trace.so.11.0 00:02:56.701 SYMLINK libspdk_keyring.so 00:02:56.701 SYMLINK libspdk_trace.so 00:02:56.959 CC lib/thread/thread.o 00:02:56.959 CC lib/thread/iobuf.o 00:02:56.959 CC lib/sock/sock.o 00:02:56.959 CC lib/sock/sock_rpc.o 00:02:57.525 LIB libspdk_sock.a 00:02:57.525 SO libspdk_sock.so.10.0 00:02:57.525 SYMLINK libspdk_sock.so 00:02:57.782 CC lib/nvme/nvme_ctrlr.o 00:02:57.782 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:57.782 CC lib/nvme/nvme_fabric.o 00:02:57.782 CC lib/nvme/nvme_ns_cmd.o 00:02:57.782 CC lib/nvme/nvme_ns.o 00:02:57.782 CC lib/nvme/nvme_pcie_common.o 00:02:57.782 CC lib/nvme/nvme_pcie.o 00:02:57.782 CC lib/nvme/nvme_qpair.o 00:02:57.782 CC lib/nvme/nvme.o 00:02:59.154 CC lib/nvme/nvme_quirks.o 00:02:59.154 CC lib/nvme/nvme_transport.o 00:02:59.154 CC lib/nvme/nvme_discovery.o 00:02:59.154 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:59.154 LIB libspdk_thread.a 00:02:59.154 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:59.154 SO libspdk_thread.so.10.2 00:02:59.154 CC lib/nvme/nvme_tcp.o 00:02:59.154 CC lib/nvme/nvme_opal.o 00:02:59.154 SYMLINK libspdk_thread.so 00:02:59.154 CC lib/nvme/nvme_io_msg.o 00:02:59.412 CC lib/nvme/nvme_poll_group.o 00:02:59.412 CC lib/nvme/nvme_zns.o 00:02:59.670 CC 
lib/nvme/nvme_stubs.o 00:02:59.928 CC lib/accel/accel.o 00:02:59.928 CC lib/accel/accel_rpc.o 00:02:59.928 CC lib/nvme/nvme_auth.o 00:03:00.186 CC lib/nvme/nvme_cuse.o 00:03:00.186 CC lib/blob/blobstore.o 00:03:00.186 CC lib/blob/request.o 00:03:00.186 CC lib/blob/zeroes.o 00:03:00.186 CC lib/blob/blob_bs_dev.o 00:03:00.443 CC lib/init/json_config.o 00:03:00.701 CC lib/virtio/virtio.o 00:03:00.701 CC lib/fsdev/fsdev.o 00:03:00.701 CC lib/init/subsystem.o 00:03:00.701 CC lib/init/subsystem_rpc.o 00:03:00.960 CC lib/accel/accel_sw.o 00:03:00.960 CC lib/init/rpc.o 00:03:00.960 CC lib/virtio/virtio_vhost_user.o 00:03:01.218 LIB libspdk_init.a 00:03:01.218 CC lib/nvme/nvme_rdma.o 00:03:01.218 CC lib/fsdev/fsdev_io.o 00:03:01.218 SO libspdk_init.so.6.0 00:03:01.218 CC lib/virtio/virtio_vfio_user.o 00:03:01.218 CC lib/virtio/virtio_pci.o 00:03:01.218 SYMLINK libspdk_init.so 00:03:01.218 CC lib/fsdev/fsdev_rpc.o 00:03:01.475 CC lib/event/app.o 00:03:01.475 CC lib/event/reactor.o 00:03:01.475 CC lib/event/log_rpc.o 00:03:01.475 CC lib/event/app_rpc.o 00:03:01.746 LIB libspdk_accel.a 00:03:01.746 LIB libspdk_virtio.a 00:03:01.746 CC lib/event/scheduler_static.o 00:03:01.746 SO libspdk_accel.so.16.0 00:03:01.746 SO libspdk_virtio.so.7.0 00:03:01.746 LIB libspdk_fsdev.a 00:03:01.746 SO libspdk_fsdev.so.1.0 00:03:01.746 SYMLINK libspdk_accel.so 00:03:01.746 SYMLINK libspdk_virtio.so 00:03:01.746 SYMLINK libspdk_fsdev.so 00:03:02.044 CC lib/bdev/bdev.o 00:03:02.044 CC lib/bdev/bdev_rpc.o 00:03:02.044 CC lib/bdev/bdev_zone.o 00:03:02.044 CC lib/bdev/part.o 00:03:02.044 CC lib/bdev/scsi_nvme.o 00:03:02.044 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:03:02.302 LIB libspdk_event.a 00:03:02.302 SO libspdk_event.so.15.0 00:03:02.560 SYMLINK libspdk_event.so 00:03:02.818 LIB libspdk_fuse_dispatcher.a 00:03:03.076 SO libspdk_fuse_dispatcher.so.1.0 00:03:03.076 SYMLINK libspdk_fuse_dispatcher.so 00:03:03.076 LIB libspdk_nvme.a 00:03:03.334 SO libspdk_nvme.so.14.0 00:03:03.593 SYMLINK libspdk_nvme.so 00:03:04.966 LIB libspdk_blob.a 00:03:04.966 SO libspdk_blob.so.11.0 00:03:04.966 SYMLINK libspdk_blob.so 00:03:05.224 CC lib/lvol/lvol.o 00:03:05.225 CC lib/blobfs/blobfs.o 00:03:05.225 CC lib/blobfs/tree.o 00:03:06.160 LIB libspdk_bdev.a 00:03:06.160 SO libspdk_bdev.so.17.0 00:03:06.160 SYMLINK libspdk_bdev.so 00:03:06.419 LIB libspdk_blobfs.a 00:03:06.419 SO libspdk_blobfs.so.10.0 00:03:06.419 CC lib/nbd/nbd.o 00:03:06.419 CC lib/nbd/nbd_rpc.o 00:03:06.419 CC lib/scsi/dev.o 00:03:06.419 CC lib/ftl/ftl_core.o 00:03:06.419 CC lib/scsi/lun.o 00:03:06.419 CC lib/scsi/port.o 00:03:06.419 CC lib/ublk/ublk.o 00:03:06.419 CC lib/nvmf/ctrlr.o 00:03:06.419 SYMLINK libspdk_blobfs.so 00:03:06.419 CC lib/ftl/ftl_init.o 00:03:06.677 LIB libspdk_lvol.a 00:03:06.677 CC lib/ftl/ftl_layout.o 00:03:06.677 SO libspdk_lvol.so.10.0 00:03:06.677 CC lib/scsi/scsi.o 00:03:06.677 CC lib/ftl/ftl_debug.o 00:03:06.677 SYMLINK libspdk_lvol.so 00:03:06.677 CC lib/ftl/ftl_io.o 00:03:06.677 CC lib/ftl/ftl_sb.o 00:03:06.935 CC lib/ftl/ftl_l2p.o 00:03:06.935 CC lib/scsi/scsi_bdev.o 00:03:06.935 CC lib/scsi/scsi_pr.o 00:03:06.935 LIB libspdk_nbd.a 00:03:06.935 CC lib/ftl/ftl_l2p_flat.o 00:03:06.935 CC lib/nvmf/ctrlr_discovery.o 00:03:06.935 SO libspdk_nbd.so.7.0 00:03:07.193 CC lib/nvmf/ctrlr_bdev.o 00:03:07.193 SYMLINK libspdk_nbd.so 00:03:07.193 CC lib/nvmf/subsystem.o 00:03:07.193 CC lib/nvmf/nvmf.o 00:03:07.193 CC lib/nvmf/nvmf_rpc.o 00:03:07.193 CC lib/ftl/ftl_nv_cache.o 00:03:07.451 CC lib/ftl/ftl_band.o 00:03:07.451 CC lib/ublk/ublk_rpc.o 
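[editorial note] The SO/SYMLINK pairs in this stretch (for example libspdk_log.so.7.0 alongside a bare libspdk_log.so) follow the usual versioned shared-library layout: the versioned file is the real object and the unversioned name is a symlink resolved at link time. In shell terms the SYMLINK step amounts to the following (the build/lib directory is an assumption for illustration):

    cd build/lib
    ln -sf libspdk_log.so.7.0 libspdk_log.so
    # readelf -d libspdk_log.so.7.0 | grep SONAME  # would typically show the embedded soname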
00:03:07.709 LIB libspdk_ublk.a 00:03:07.709 CC lib/scsi/scsi_rpc.o 00:03:07.709 SO libspdk_ublk.so.3.0 00:03:07.709 CC lib/nvmf/transport.o 00:03:07.709 SYMLINK libspdk_ublk.so 00:03:07.709 CC lib/nvmf/tcp.o 00:03:07.709 CC lib/scsi/task.o 00:03:07.967 CC lib/ftl/ftl_band_ops.o 00:03:07.967 CC lib/nvmf/stubs.o 00:03:07.967 LIB libspdk_scsi.a 00:03:08.225 SO libspdk_scsi.so.9.0 00:03:08.225 SYMLINK libspdk_scsi.so 00:03:08.225 CC lib/nvmf/mdns_server.o 00:03:08.225 CC lib/nvmf/rdma.o 00:03:08.225 CC lib/nvmf/auth.o 00:03:08.483 CC lib/ftl/ftl_writer.o 00:03:08.483 CC lib/ftl/ftl_rq.o 00:03:08.483 CC lib/iscsi/conn.o 00:03:08.741 CC lib/ftl/ftl_reloc.o 00:03:08.741 CC lib/ftl/ftl_l2p_cache.o 00:03:08.741 CC lib/ftl/ftl_p2l.o 00:03:08.741 CC lib/ftl/ftl_p2l_log.o 00:03:08.741 CC lib/ftl/mngt/ftl_mngt.o 00:03:08.999 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:08.999 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:09.257 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:09.257 CC lib/iscsi/init_grp.o 00:03:09.257 CC lib/iscsi/iscsi.o 00:03:09.257 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:09.257 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:09.257 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:09.257 CC lib/iscsi/param.o 00:03:09.555 CC lib/vhost/vhost.o 00:03:09.555 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:09.555 CC lib/iscsi/portal_grp.o 00:03:09.555 CC lib/iscsi/tgt_node.o 00:03:09.555 CC lib/iscsi/iscsi_subsystem.o 00:03:09.814 CC lib/iscsi/iscsi_rpc.o 00:03:09.814 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:09.814 CC lib/iscsi/task.o 00:03:09.814 CC lib/vhost/vhost_rpc.o 00:03:09.814 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:10.072 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:10.072 CC lib/vhost/vhost_scsi.o 00:03:10.072 CC lib/vhost/vhost_blk.o 00:03:10.072 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:10.072 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:10.072 CC lib/ftl/utils/ftl_conf.o 00:03:10.329 CC lib/ftl/utils/ftl_md.o 00:03:10.329 CC lib/ftl/utils/ftl_mempool.o 00:03:10.329 CC lib/ftl/utils/ftl_bitmap.o 00:03:10.329 CC lib/ftl/utils/ftl_property.o 00:03:10.587 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:10.587 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:10.587 CC lib/vhost/rte_vhost_user.o 00:03:10.587 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:10.845 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:10.845 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:10.845 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:10.845 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:10.845 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:11.103 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:11.103 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:11.103 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:11.103 LIB libspdk_iscsi.a 00:03:11.362 SO libspdk_iscsi.so.8.0 00:03:11.362 LIB libspdk_nvmf.a 00:03:11.362 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:03:11.362 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:03:11.362 CC lib/ftl/base/ftl_base_dev.o 00:03:11.362 CC lib/ftl/base/ftl_base_bdev.o 00:03:11.362 CC lib/ftl/ftl_trace.o 00:03:11.362 SO libspdk_nvmf.so.19.0 00:03:11.362 SYMLINK libspdk_iscsi.so 00:03:11.619 LIB libspdk_ftl.a 00:03:11.619 SYMLINK libspdk_nvmf.so 00:03:11.877 SO libspdk_ftl.so.9.0 00:03:11.877 LIB libspdk_vhost.a 00:03:12.134 SO libspdk_vhost.so.8.0 00:03:12.134 SYMLINK libspdk_vhost.so 00:03:12.134 SYMLINK libspdk_ftl.so 00:03:12.701 CC module/env_dpdk/env_dpdk_rpc.o 00:03:12.701 CC module/sock/posix/posix.o 00:03:12.701 CC module/accel/ioat/accel_ioat.o 00:03:12.701 CC module/keyring/file/keyring.o 00:03:12.701 CC module/keyring/linux/keyring.o 00:03:12.701 CC module/scheduler/dpdk_governor/dpdk_governor.o 
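[editorial note] Everything in this stage builds out of the checkout at /home/vagrant/spdk_repo/spdk seen in the paths above, with DPDK configured under b_sanitize=address and xnvme targets present earlier in the log. A hedged sketch of reproducing the stage by hand (the configure flags are inferred from those artifacts; the log itself never prints the configure command):

    cd /home/vagrant/spdk_repo/spdk
    ./configure --enable-asan --with-xnvme   # inferred from b_sanitize=address and the xnvme targets above
    make -j10                                # matches the "-j 10" ninja invocation earlier in the log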
00:03:12.701 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:12.701 CC module/accel/error/accel_error.o 00:03:12.701 CC module/fsdev/aio/fsdev_aio.o 00:03:12.701 CC module/blob/bdev/blob_bdev.o 00:03:12.701 LIB libspdk_env_dpdk_rpc.a 00:03:12.701 SO libspdk_env_dpdk_rpc.so.6.0 00:03:12.958 CC module/keyring/linux/keyring_rpc.o 00:03:12.958 CC module/keyring/file/keyring_rpc.o 00:03:12.958 LIB libspdk_scheduler_dpdk_governor.a 00:03:12.958 SYMLINK libspdk_env_dpdk_rpc.so 00:03:12.958 CC module/fsdev/aio/fsdev_aio_rpc.o 00:03:12.958 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:12.958 CC module/accel/ioat/accel_ioat_rpc.o 00:03:12.958 CC module/accel/error/accel_error_rpc.o 00:03:12.958 LIB libspdk_scheduler_dynamic.a 00:03:12.958 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:12.958 SO libspdk_scheduler_dynamic.so.4.0 00:03:12.958 LIB libspdk_keyring_linux.a 00:03:12.958 LIB libspdk_keyring_file.a 00:03:12.958 SO libspdk_keyring_linux.so.1.0 00:03:12.958 SYMLINK libspdk_scheduler_dynamic.so 00:03:12.958 LIB libspdk_blob_bdev.a 00:03:12.958 LIB libspdk_accel_ioat.a 00:03:12.958 SO libspdk_blob_bdev.so.11.0 00:03:12.958 SO libspdk_keyring_file.so.2.0 00:03:12.958 LIB libspdk_accel_error.a 00:03:13.215 SO libspdk_accel_ioat.so.6.0 00:03:13.215 SYMLINK libspdk_keyring_linux.so 00:03:13.215 SO libspdk_accel_error.so.2.0 00:03:13.215 SYMLINK libspdk_keyring_file.so 00:03:13.215 SYMLINK libspdk_blob_bdev.so 00:03:13.215 CC module/fsdev/aio/linux_aio_mgr.o 00:03:13.215 SYMLINK libspdk_accel_ioat.so 00:03:13.215 SYMLINK libspdk_accel_error.so 00:03:13.215 CC module/scheduler/gscheduler/gscheduler.o 00:03:13.215 CC module/accel/iaa/accel_iaa.o 00:03:13.215 CC module/accel/iaa/accel_iaa_rpc.o 00:03:13.215 CC module/accel/dsa/accel_dsa.o 00:03:13.215 CC module/accel/dsa/accel_dsa_rpc.o 00:03:13.473 LIB libspdk_scheduler_gscheduler.a 00:03:13.473 SO libspdk_scheduler_gscheduler.so.4.0 00:03:13.473 CC module/blobfs/bdev/blobfs_bdev.o 00:03:13.473 CC module/bdev/delay/vbdev_delay.o 00:03:13.473 SYMLINK libspdk_scheduler_gscheduler.so 00:03:13.473 LIB libspdk_accel_iaa.a 00:03:13.473 SO libspdk_accel_iaa.so.3.0 00:03:13.473 CC module/bdev/error/vbdev_error.o 00:03:13.473 CC module/bdev/lvol/vbdev_lvol.o 00:03:13.473 CC module/bdev/gpt/gpt.o 00:03:13.730 SYMLINK libspdk_accel_iaa.so 00:03:13.730 CC module/bdev/gpt/vbdev_gpt.o 00:03:13.730 LIB libspdk_fsdev_aio.a 00:03:13.730 LIB libspdk_accel_dsa.a 00:03:13.730 CC module/bdev/malloc/bdev_malloc.o 00:03:13.730 SO libspdk_accel_dsa.so.5.0 00:03:13.730 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:13.730 SO libspdk_fsdev_aio.so.1.0 00:03:13.730 LIB libspdk_sock_posix.a 00:03:13.730 SO libspdk_sock_posix.so.6.0 00:03:13.730 SYMLINK libspdk_accel_dsa.so 00:03:13.730 SYMLINK libspdk_fsdev_aio.so 00:03:13.730 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:13.730 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:13.730 CC module/bdev/error/vbdev_error_rpc.o 00:03:13.730 SYMLINK libspdk_sock_posix.so 00:03:13.730 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:13.988 LIB libspdk_blobfs_bdev.a 00:03:13.988 SO libspdk_blobfs_bdev.so.6.0 00:03:13.988 LIB libspdk_bdev_delay.a 00:03:13.988 LIB libspdk_bdev_gpt.a 00:03:13.988 LIB libspdk_bdev_error.a 00:03:13.988 SO libspdk_bdev_delay.so.6.0 00:03:13.988 SO libspdk_bdev_gpt.so.6.0 00:03:13.988 SYMLINK libspdk_blobfs_bdev.so 00:03:13.988 SO libspdk_bdev_error.so.6.0 00:03:13.988 CC module/bdev/null/bdev_null.o 00:03:13.988 SYMLINK libspdk_bdev_gpt.so 00:03:13.988 SYMLINK libspdk_bdev_delay.so 00:03:14.246 LIB 
libspdk_bdev_malloc.a 00:03:14.246 CC module/bdev/nvme/bdev_nvme.o 00:03:14.246 CC module/bdev/passthru/vbdev_passthru.o 00:03:14.246 SO libspdk_bdev_malloc.so.6.0 00:03:14.246 SYMLINK libspdk_bdev_error.so 00:03:14.246 CC module/bdev/raid/bdev_raid.o 00:03:14.246 CC module/bdev/raid/bdev_raid_rpc.o 00:03:14.246 CC module/bdev/split/vbdev_split.o 00:03:14.246 SYMLINK libspdk_bdev_malloc.so 00:03:14.246 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:14.246 LIB libspdk_bdev_lvol.a 00:03:14.246 SO libspdk_bdev_lvol.so.6.0 00:03:14.504 CC module/bdev/xnvme/bdev_xnvme.o 00:03:14.504 CC module/bdev/null/bdev_null_rpc.o 00:03:14.504 SYMLINK libspdk_bdev_lvol.so 00:03:14.504 CC module/bdev/aio/bdev_aio.o 00:03:14.504 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:14.504 CC module/bdev/raid/bdev_raid_sb.o 00:03:14.504 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:14.504 CC module/bdev/split/vbdev_split_rpc.o 00:03:14.762 CC module/bdev/raid/raid0.o 00:03:14.762 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:03:14.762 LIB libspdk_bdev_null.a 00:03:14.762 SO libspdk_bdev_null.so.6.0 00:03:14.762 LIB libspdk_bdev_passthru.a 00:03:14.762 LIB libspdk_bdev_split.a 00:03:14.762 SO libspdk_bdev_passthru.so.6.0 00:03:14.762 SO libspdk_bdev_split.so.6.0 00:03:14.762 SYMLINK libspdk_bdev_null.so 00:03:14.762 SYMLINK libspdk_bdev_passthru.so 00:03:14.762 CC module/bdev/raid/raid1.o 00:03:14.762 CC module/bdev/raid/concat.o 00:03:14.762 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:14.762 SYMLINK libspdk_bdev_split.so 00:03:15.020 CC module/bdev/aio/bdev_aio_rpc.o 00:03:15.020 LIB libspdk_bdev_xnvme.a 00:03:15.020 SO libspdk_bdev_xnvme.so.3.0 00:03:15.020 LIB libspdk_bdev_zone_block.a 00:03:15.020 CC module/bdev/nvme/nvme_rpc.o 00:03:15.020 SO libspdk_bdev_zone_block.so.6.0 00:03:15.020 SYMLINK libspdk_bdev_xnvme.so 00:03:15.020 CC module/bdev/nvme/bdev_mdns_client.o 00:03:15.020 SYMLINK libspdk_bdev_zone_block.so 00:03:15.020 LIB libspdk_bdev_aio.a 00:03:15.020 CC module/bdev/nvme/vbdev_opal.o 00:03:15.278 SO libspdk_bdev_aio.so.6.0 00:03:15.278 CC module/bdev/ftl/bdev_ftl.o 00:03:15.278 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:15.278 SYMLINK libspdk_bdev_aio.so 00:03:15.278 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:15.278 CC module/bdev/iscsi/bdev_iscsi.o 00:03:15.278 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:15.536 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:15.536 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:15.536 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:15.536 LIB libspdk_bdev_ftl.a 00:03:15.536 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:15.536 SO libspdk_bdev_ftl.so.6.0 00:03:15.536 LIB libspdk_bdev_raid.a 00:03:15.536 SYMLINK libspdk_bdev_ftl.so 00:03:15.794 SO libspdk_bdev_raid.so.6.0 00:03:15.794 SYMLINK libspdk_bdev_raid.so 00:03:15.794 LIB libspdk_bdev_iscsi.a 00:03:15.794 SO libspdk_bdev_iscsi.so.6.0 00:03:16.052 SYMLINK libspdk_bdev_iscsi.so 00:03:16.052 LIB libspdk_bdev_virtio.a 00:03:16.310 SO libspdk_bdev_virtio.so.6.0 00:03:16.310 SYMLINK libspdk_bdev_virtio.so 00:03:17.687 LIB libspdk_bdev_nvme.a 00:03:17.687 SO libspdk_bdev_nvme.so.7.0 00:03:17.687 SYMLINK libspdk_bdev_nvme.so 00:03:18.253 CC module/event/subsystems/vmd/vmd.o 00:03:18.253 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:18.253 CC module/event/subsystems/fsdev/fsdev.o 00:03:18.253 CC module/event/subsystems/iobuf/iobuf.o 00:03:18.253 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:18.253 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:18.253 CC module/event/subsystems/keyring/keyring.o 
00:03:18.253 CC module/event/subsystems/scheduler/scheduler.o 00:03:18.253 CC module/event/subsystems/sock/sock.o 00:03:18.511 LIB libspdk_event_fsdev.a 00:03:18.511 LIB libspdk_event_vmd.a 00:03:18.511 LIB libspdk_event_sock.a 00:03:18.511 SO libspdk_event_fsdev.so.1.0 00:03:18.511 LIB libspdk_event_vhost_blk.a 00:03:18.511 LIB libspdk_event_scheduler.a 00:03:18.511 LIB libspdk_event_keyring.a 00:03:18.511 SO libspdk_event_sock.so.5.0 00:03:18.511 SO libspdk_event_vmd.so.6.0 00:03:18.511 SO libspdk_event_vhost_blk.so.3.0 00:03:18.511 SO libspdk_event_scheduler.so.4.0 00:03:18.511 SO libspdk_event_keyring.so.1.0 00:03:18.511 SYMLINK libspdk_event_fsdev.so 00:03:18.511 LIB libspdk_event_iobuf.a 00:03:18.511 SYMLINK libspdk_event_sock.so 00:03:18.511 SYMLINK libspdk_event_vmd.so 00:03:18.511 SYMLINK libspdk_event_scheduler.so 00:03:18.511 SYMLINK libspdk_event_vhost_blk.so 00:03:18.511 SYMLINK libspdk_event_keyring.so 00:03:18.511 SO libspdk_event_iobuf.so.3.0 00:03:18.768 SYMLINK libspdk_event_iobuf.so 00:03:19.025 CC module/event/subsystems/accel/accel.o 00:03:19.025 LIB libspdk_event_accel.a 00:03:19.025 SO libspdk_event_accel.so.6.0 00:03:19.283 SYMLINK libspdk_event_accel.so 00:03:19.541 CC module/event/subsystems/bdev/bdev.o 00:03:19.541 LIB libspdk_event_bdev.a 00:03:19.799 SO libspdk_event_bdev.so.6.0 00:03:19.799 SYMLINK libspdk_event_bdev.so 00:03:20.057 CC module/event/subsystems/scsi/scsi.o 00:03:20.057 CC module/event/subsystems/nbd/nbd.o 00:03:20.057 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:20.057 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:20.057 CC module/event/subsystems/ublk/ublk.o 00:03:20.057 LIB libspdk_event_nbd.a 00:03:20.057 LIB libspdk_event_scsi.a 00:03:20.315 LIB libspdk_event_ublk.a 00:03:20.315 SO libspdk_event_nbd.so.6.0 00:03:20.315 SO libspdk_event_scsi.so.6.0 00:03:20.315 SO libspdk_event_ublk.so.3.0 00:03:20.315 SYMLINK libspdk_event_nbd.so 00:03:20.315 SYMLINK libspdk_event_scsi.so 00:03:20.315 SYMLINK libspdk_event_ublk.so 00:03:20.315 LIB libspdk_event_nvmf.a 00:03:20.315 SO libspdk_event_nvmf.so.6.0 00:03:20.574 CC module/event/subsystems/iscsi/iscsi.o 00:03:20.574 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:20.574 SYMLINK libspdk_event_nvmf.so 00:03:20.574 LIB libspdk_event_iscsi.a 00:03:20.574 LIB libspdk_event_vhost_scsi.a 00:03:20.832 SO libspdk_event_iscsi.so.6.0 00:03:20.832 SO libspdk_event_vhost_scsi.so.3.0 00:03:20.832 SYMLINK libspdk_event_iscsi.so 00:03:20.832 SYMLINK libspdk_event_vhost_scsi.so 00:03:20.832 SO libspdk.so.6.0 00:03:20.832 SYMLINK libspdk.so 00:03:21.090 CXX app/trace/trace.o 00:03:21.090 TEST_HEADER include/spdk/accel.h 00:03:21.090 TEST_HEADER include/spdk/accel_module.h 00:03:21.090 TEST_HEADER include/spdk/assert.h 00:03:21.090 TEST_HEADER include/spdk/barrier.h 00:03:21.090 TEST_HEADER include/spdk/base64.h 00:03:21.090 CC app/trace_record/trace_record.o 00:03:21.090 TEST_HEADER include/spdk/bdev.h 00:03:21.090 TEST_HEADER include/spdk/bdev_module.h 00:03:21.090 TEST_HEADER include/spdk/bdev_zone.h 00:03:21.090 TEST_HEADER include/spdk/bit_array.h 00:03:21.090 TEST_HEADER include/spdk/bit_pool.h 00:03:21.090 TEST_HEADER include/spdk/blob_bdev.h 00:03:21.090 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:21.090 TEST_HEADER include/spdk/blobfs.h 00:03:21.090 TEST_HEADER include/spdk/blob.h 00:03:21.090 TEST_HEADER include/spdk/conf.h 00:03:21.090 TEST_HEADER include/spdk/config.h 00:03:21.090 TEST_HEADER include/spdk/cpuset.h 00:03:21.090 TEST_HEADER include/spdk/crc16.h 00:03:21.090 TEST_HEADER 
include/spdk/crc32.h 00:03:21.090 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:21.090 TEST_HEADER include/spdk/crc64.h 00:03:21.348 TEST_HEADER include/spdk/dif.h 00:03:21.348 TEST_HEADER include/spdk/dma.h 00:03:21.348 TEST_HEADER include/spdk/endian.h 00:03:21.348 TEST_HEADER include/spdk/env_dpdk.h 00:03:21.348 TEST_HEADER include/spdk/env.h 00:03:21.348 TEST_HEADER include/spdk/event.h 00:03:21.348 TEST_HEADER include/spdk/fd_group.h 00:03:21.348 TEST_HEADER include/spdk/fd.h 00:03:21.348 TEST_HEADER include/spdk/file.h 00:03:21.348 TEST_HEADER include/spdk/fsdev.h 00:03:21.348 TEST_HEADER include/spdk/fsdev_module.h 00:03:21.348 TEST_HEADER include/spdk/ftl.h 00:03:21.348 TEST_HEADER include/spdk/fuse_dispatcher.h 00:03:21.348 TEST_HEADER include/spdk/gpt_spec.h 00:03:21.348 TEST_HEADER include/spdk/hexlify.h 00:03:21.348 TEST_HEADER include/spdk/histogram_data.h 00:03:21.348 TEST_HEADER include/spdk/idxd.h 00:03:21.348 TEST_HEADER include/spdk/idxd_spec.h 00:03:21.348 TEST_HEADER include/spdk/init.h 00:03:21.348 TEST_HEADER include/spdk/ioat.h 00:03:21.348 CC examples/ioat/perf/perf.o 00:03:21.348 TEST_HEADER include/spdk/ioat_spec.h 00:03:21.348 CC test/thread/poller_perf/poller_perf.o 00:03:21.348 TEST_HEADER include/spdk/iscsi_spec.h 00:03:21.348 TEST_HEADER include/spdk/json.h 00:03:21.348 TEST_HEADER include/spdk/jsonrpc.h 00:03:21.348 CC examples/util/zipf/zipf.o 00:03:21.348 TEST_HEADER include/spdk/keyring.h 00:03:21.348 TEST_HEADER include/spdk/keyring_module.h 00:03:21.348 TEST_HEADER include/spdk/likely.h 00:03:21.348 TEST_HEADER include/spdk/log.h 00:03:21.348 TEST_HEADER include/spdk/lvol.h 00:03:21.348 TEST_HEADER include/spdk/md5.h 00:03:21.348 TEST_HEADER include/spdk/memory.h 00:03:21.348 TEST_HEADER include/spdk/mmio.h 00:03:21.348 TEST_HEADER include/spdk/nbd.h 00:03:21.348 TEST_HEADER include/spdk/net.h 00:03:21.348 TEST_HEADER include/spdk/notify.h 00:03:21.348 TEST_HEADER include/spdk/nvme.h 00:03:21.348 TEST_HEADER include/spdk/nvme_intel.h 00:03:21.348 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:21.348 CC test/dma/test_dma/test_dma.o 00:03:21.348 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:21.348 TEST_HEADER include/spdk/nvme_spec.h 00:03:21.348 TEST_HEADER include/spdk/nvme_zns.h 00:03:21.348 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:21.348 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:21.348 TEST_HEADER include/spdk/nvmf.h 00:03:21.348 TEST_HEADER include/spdk/nvmf_spec.h 00:03:21.348 TEST_HEADER include/spdk/nvmf_transport.h 00:03:21.348 TEST_HEADER include/spdk/opal.h 00:03:21.348 TEST_HEADER include/spdk/opal_spec.h 00:03:21.348 TEST_HEADER include/spdk/pci_ids.h 00:03:21.348 TEST_HEADER include/spdk/pipe.h 00:03:21.348 TEST_HEADER include/spdk/queue.h 00:03:21.348 TEST_HEADER include/spdk/reduce.h 00:03:21.348 TEST_HEADER include/spdk/rpc.h 00:03:21.348 TEST_HEADER include/spdk/scheduler.h 00:03:21.348 CC test/app/bdev_svc/bdev_svc.o 00:03:21.348 TEST_HEADER include/spdk/scsi.h 00:03:21.348 TEST_HEADER include/spdk/scsi_spec.h 00:03:21.348 TEST_HEADER include/spdk/sock.h 00:03:21.348 TEST_HEADER include/spdk/stdinc.h 00:03:21.348 TEST_HEADER include/spdk/string.h 00:03:21.348 TEST_HEADER include/spdk/thread.h 00:03:21.348 TEST_HEADER include/spdk/trace.h 00:03:21.348 TEST_HEADER include/spdk/trace_parser.h 00:03:21.348 TEST_HEADER include/spdk/tree.h 00:03:21.348 TEST_HEADER include/spdk/ublk.h 00:03:21.348 CC test/env/mem_callbacks/mem_callbacks.o 00:03:21.348 TEST_HEADER include/spdk/util.h 00:03:21.348 TEST_HEADER include/spdk/uuid.h 
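
The TEST_HEADER roll call (it continues below through zipf.h) feeds the header check that later compiles one translation unit per public header (the CXX test/cpp_headers/*.o lines further down), so a header that forgets one of its own includes fails on its own. A minimal sketch of that pattern under assumed paths and flags, not SPDK's actual harness:

    # Sketch: compile each public header in isolation. The include layout
    # and compiler invocation are assumptions for illustration only.
    set -euo pipefail
    workdir=$(mktemp -d)
    trap 'rm -rf "$workdir"' EXIT
    for hdr in include/spdk/*.h; do
        name=$(basename "${hdr%.h}")
        # a stub .c whose only job is to pull in exactly one header
        printf '#include <spdk/%s.h>\n' "$name" > "$workdir/$name.c"
        cc -Iinclude -c "$workdir/$name.c" -o "$workdir/$name.o"
    done
    echo "all headers compiled standalone"
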
00:03:21.348 TEST_HEADER include/spdk/version.h 00:03:21.348 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:21.348 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:21.348 TEST_HEADER include/spdk/vhost.h 00:03:21.348 TEST_HEADER include/spdk/vmd.h 00:03:21.348 TEST_HEADER include/spdk/xor.h 00:03:21.348 TEST_HEADER include/spdk/zipf.h 00:03:21.348 CXX test/cpp_headers/accel.o 00:03:21.348 LINK poller_perf 00:03:21.348 LINK interrupt_tgt 00:03:21.606 LINK zipf 00:03:21.606 LINK ioat_perf 00:03:21.606 LINK spdk_trace_record 00:03:21.606 LINK bdev_svc 00:03:21.606 CXX test/cpp_headers/accel_module.o 00:03:21.606 CXX test/cpp_headers/assert.o 00:03:21.606 CXX test/cpp_headers/barrier.o 00:03:21.606 CC examples/ioat/verify/verify.o 00:03:21.606 LINK spdk_trace 00:03:21.864 CC app/iscsi_tgt/iscsi_tgt.o 00:03:21.864 CC app/nvmf_tgt/nvmf_main.o 00:03:21.864 CC test/app/histogram_perf/histogram_perf.o 00:03:21.864 CXX test/cpp_headers/base64.o 00:03:21.864 CXX test/cpp_headers/bdev.o 00:03:21.864 CC test/app/jsoncat/jsoncat.o 00:03:21.864 LINK test_dma 00:03:21.864 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:22.121 LINK verify 00:03:22.121 LINK nvmf_tgt 00:03:22.121 LINK iscsi_tgt 00:03:22.121 LINK histogram_perf 00:03:22.121 LINK mem_callbacks 00:03:22.121 CXX test/cpp_headers/bdev_module.o 00:03:22.121 LINK jsoncat 00:03:22.121 CXX test/cpp_headers/bdev_zone.o 00:03:22.379 CC app/spdk_tgt/spdk_tgt.o 00:03:22.379 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:22.379 CC test/env/vtophys/vtophys.o 00:03:22.379 CC test/env/memory/memory_ut.o 00:03:22.379 CC app/spdk_lspci/spdk_lspci.o 00:03:22.379 CC examples/thread/thread/thread_ex.o 00:03:22.379 CC app/spdk_nvme_perf/perf.o 00:03:22.379 CXX test/cpp_headers/bit_array.o 00:03:22.637 LINK vtophys 00:03:22.637 LINK env_dpdk_post_init 00:03:22.637 LINK nvme_fuzz 00:03:22.637 LINK spdk_tgt 00:03:22.637 CC app/spdk_nvme_identify/identify.o 00:03:22.637 LINK spdk_lspci 00:03:22.637 CXX test/cpp_headers/bit_pool.o 00:03:22.637 LINK thread 00:03:22.895 CC test/env/pci/pci_ut.o 00:03:22.895 CC app/spdk_nvme_discover/discovery_aer.o 00:03:22.895 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:22.895 CC test/rpc_client/rpc_client_test.o 00:03:22.895 CXX test/cpp_headers/blob_bdev.o 00:03:22.895 CC app/spdk_top/spdk_top.o 00:03:23.152 LINK spdk_nvme_discover 00:03:23.152 LINK rpc_client_test 00:03:23.152 CXX test/cpp_headers/blobfs_bdev.o 00:03:23.152 CC examples/sock/hello_world/hello_sock.o 00:03:23.410 CXX test/cpp_headers/blobfs.o 00:03:23.410 CC app/spdk_dd/spdk_dd.o 00:03:23.410 CC app/vhost/vhost.o 00:03:23.410 LINK hello_sock 00:03:23.410 LINK pci_ut 00:03:23.677 LINK spdk_nvme_perf 00:03:23.677 CXX test/cpp_headers/blob.o 00:03:23.677 LINK vhost 00:03:23.677 LINK spdk_nvme_identify 00:03:23.952 CXX test/cpp_headers/conf.o 00:03:23.952 CXX test/cpp_headers/config.o 00:03:23.952 LINK memory_ut 00:03:23.952 CC examples/vmd/lsvmd/lsvmd.o 00:03:23.952 LINK spdk_dd 00:03:23.952 CC examples/idxd/perf/perf.o 00:03:23.952 CXX test/cpp_headers/cpuset.o 00:03:23.952 CXX test/cpp_headers/crc16.o 00:03:23.952 LINK lsvmd 00:03:23.952 CC app/fio/nvme/fio_plugin.o 00:03:24.210 CC examples/accel/perf/accel_perf.o 00:03:24.210 CC examples/fsdev/hello_world/hello_fsdev.o 00:03:24.210 LINK spdk_top 00:03:24.210 CXX test/cpp_headers/crc32.o 00:03:24.468 CC examples/vmd/led/led.o 00:03:24.468 LINK idxd_perf 00:03:24.468 LINK hello_fsdev 00:03:24.468 CC examples/blob/hello_world/hello_blob.o 00:03:24.468 CXX test/cpp_headers/crc64.o 00:03:24.468 CC 
test/accel/dif/dif.o 00:03:24.726 LINK led 00:03:24.726 CC examples/nvme/hello_world/hello_world.o 00:03:24.726 CXX test/cpp_headers/dif.o 00:03:24.726 CC examples/nvme/reconnect/reconnect.o 00:03:24.726 LINK hello_blob 00:03:24.984 LINK accel_perf 00:03:24.984 CC app/fio/bdev/fio_plugin.o 00:03:24.984 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:24.984 CXX test/cpp_headers/dma.o 00:03:24.984 CXX test/cpp_headers/endian.o 00:03:24.984 LINK hello_world 00:03:25.242 LINK spdk_nvme 00:03:25.242 CC examples/blob/cli/blobcli.o 00:03:25.242 CC examples/nvme/arbitration/arbitration.o 00:03:25.242 CC examples/nvme/hotplug/hotplug.o 00:03:25.242 CXX test/cpp_headers/env_dpdk.o 00:03:25.242 LINK reconnect 00:03:25.500 LINK iscsi_fuzz 00:03:25.500 CXX test/cpp_headers/env.o 00:03:25.500 CC test/blobfs/mkfs/mkfs.o 00:03:25.500 LINK spdk_bdev 00:03:25.500 LINK dif 00:03:25.500 CXX test/cpp_headers/event.o 00:03:25.757 LINK hotplug 00:03:25.757 LINK nvme_manage 00:03:25.757 LINK arbitration 00:03:25.757 CXX test/cpp_headers/fd_group.o 00:03:25.758 LINK mkfs 00:03:25.758 CXX test/cpp_headers/fd.o 00:03:25.758 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:26.016 CC test/event/reactor/reactor.o 00:03:26.016 CC test/event/event_perf/event_perf.o 00:03:26.016 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:26.016 CC examples/bdev/hello_world/hello_bdev.o 00:03:26.016 LINK blobcli 00:03:26.016 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:26.016 CXX test/cpp_headers/file.o 00:03:26.016 CC test/event/reactor_perf/reactor_perf.o 00:03:26.016 CC test/app/stub/stub.o 00:03:26.016 LINK reactor 00:03:26.016 LINK event_perf 00:03:26.274 CXX test/cpp_headers/fsdev.o 00:03:26.274 LINK cmb_copy 00:03:26.274 LINK reactor_perf 00:03:26.274 LINK hello_bdev 00:03:26.274 CC test/lvol/esnap/esnap.o 00:03:26.274 LINK stub 00:03:26.274 CC test/event/app_repeat/app_repeat.o 00:03:26.274 CXX test/cpp_headers/fsdev_module.o 00:03:26.533 CC test/event/scheduler/scheduler.o 00:03:26.533 CC test/nvme/aer/aer.o 00:03:26.533 CC examples/nvme/abort/abort.o 00:03:26.533 LINK vhost_fuzz 00:03:26.533 CXX test/cpp_headers/ftl.o 00:03:26.533 LINK app_repeat 00:03:26.791 CC test/bdev/bdevio/bdevio.o 00:03:26.791 CC examples/bdev/bdevperf/bdevperf.o 00:03:26.791 LINK scheduler 00:03:26.791 CC test/nvme/reset/reset.o 00:03:26.791 CC test/nvme/sgl/sgl.o 00:03:26.791 CXX test/cpp_headers/fuse_dispatcher.o 00:03:26.791 LINK aer 00:03:26.791 CC test/nvme/e2edp/nvme_dp.o 00:03:27.049 CXX test/cpp_headers/gpt_spec.o 00:03:27.049 LINK abort 00:03:27.049 LINK reset 00:03:27.049 CXX test/cpp_headers/hexlify.o 00:03:27.049 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:27.049 CC test/nvme/overhead/overhead.o 00:03:27.049 LINK sgl 00:03:27.306 CXX test/cpp_headers/histogram_data.o 00:03:27.306 LINK nvme_dp 00:03:27.306 LINK bdevio 00:03:27.306 LINK pmr_persistence 00:03:27.306 CC test/nvme/err_injection/err_injection.o 00:03:27.306 CC test/nvme/startup/startup.o 00:03:27.564 CC test/nvme/reserve/reserve.o 00:03:27.564 CXX test/cpp_headers/idxd.o 00:03:27.564 LINK overhead 00:03:27.564 CC test/nvme/simple_copy/simple_copy.o 00:03:27.564 CXX test/cpp_headers/idxd_spec.o 00:03:27.564 CXX test/cpp_headers/init.o 00:03:27.564 LINK err_injection 00:03:27.564 LINK startup 00:03:27.823 LINK reserve 00:03:27.823 CXX test/cpp_headers/ioat.o 00:03:27.823 CC test/nvme/connect_stress/connect_stress.o 00:03:27.823 LINK bdevperf 00:03:27.823 CC test/nvme/boot_partition/boot_partition.o 00:03:27.823 CC test/nvme/compliance/nvme_compliance.o 00:03:27.823 
LINK simple_copy 00:03:27.823 CC test/nvme/fused_ordering/fused_ordering.o 00:03:28.081 CXX test/cpp_headers/ioat_spec.o 00:03:28.081 LINK connect_stress 00:03:28.081 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:28.081 LINK boot_partition 00:03:28.081 CC test/nvme/fdp/fdp.o 00:03:28.081 CXX test/cpp_headers/iscsi_spec.o 00:03:28.081 CC test/nvme/cuse/cuse.o 00:03:28.081 LINK fused_ordering 00:03:28.339 CXX test/cpp_headers/json.o 00:03:28.339 CXX test/cpp_headers/jsonrpc.o 00:03:28.339 CC examples/nvmf/nvmf/nvmf.o 00:03:28.339 LINK nvme_compliance 00:03:28.339 LINK doorbell_aers 00:03:28.339 CXX test/cpp_headers/keyring.o 00:03:28.339 CXX test/cpp_headers/keyring_module.o 00:03:28.339 CXX test/cpp_headers/likely.o 00:03:28.339 CXX test/cpp_headers/log.o 00:03:28.596 CXX test/cpp_headers/lvol.o 00:03:28.596 LINK fdp 00:03:28.596 CXX test/cpp_headers/md5.o 00:03:28.596 CXX test/cpp_headers/memory.o 00:03:28.596 CXX test/cpp_headers/mmio.o 00:03:28.596 CXX test/cpp_headers/nbd.o 00:03:28.596 CXX test/cpp_headers/net.o 00:03:28.596 CXX test/cpp_headers/notify.o 00:03:28.596 LINK nvmf 00:03:28.596 CXX test/cpp_headers/nvme.o 00:03:28.596 CXX test/cpp_headers/nvme_intel.o 00:03:28.854 CXX test/cpp_headers/nvme_ocssd.o 00:03:28.854 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:28.854 CXX test/cpp_headers/nvme_spec.o 00:03:28.854 CXX test/cpp_headers/nvme_zns.o 00:03:28.854 CXX test/cpp_headers/nvmf_cmd.o 00:03:28.854 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:28.854 CXX test/cpp_headers/nvmf.o 00:03:28.854 CXX test/cpp_headers/nvmf_spec.o 00:03:29.112 CXX test/cpp_headers/nvmf_transport.o 00:03:29.112 CXX test/cpp_headers/opal.o 00:03:29.112 CXX test/cpp_headers/opal_spec.o 00:03:29.112 CXX test/cpp_headers/pci_ids.o 00:03:29.112 CXX test/cpp_headers/pipe.o 00:03:29.112 CXX test/cpp_headers/queue.o 00:03:29.112 CXX test/cpp_headers/reduce.o 00:03:29.112 CXX test/cpp_headers/rpc.o 00:03:29.112 CXX test/cpp_headers/scheduler.o 00:03:29.112 CXX test/cpp_headers/scsi.o 00:03:29.112 CXX test/cpp_headers/scsi_spec.o 00:03:29.370 CXX test/cpp_headers/sock.o 00:03:29.370 CXX test/cpp_headers/stdinc.o 00:03:29.370 CXX test/cpp_headers/string.o 00:03:29.370 CXX test/cpp_headers/thread.o 00:03:29.370 CXX test/cpp_headers/trace.o 00:03:29.370 CXX test/cpp_headers/trace_parser.o 00:03:29.370 CXX test/cpp_headers/tree.o 00:03:29.370 CXX test/cpp_headers/ublk.o 00:03:29.370 CXX test/cpp_headers/util.o 00:03:29.627 CXX test/cpp_headers/uuid.o 00:03:29.627 CXX test/cpp_headers/version.o 00:03:29.627 CXX test/cpp_headers/vfio_user_pci.o 00:03:29.627 CXX test/cpp_headers/vfio_user_spec.o 00:03:29.627 CXX test/cpp_headers/vhost.o 00:03:29.627 CXX test/cpp_headers/vmd.o 00:03:29.627 CXX test/cpp_headers/xor.o 00:03:29.627 CXX test/cpp_headers/zipf.o 00:03:29.885 LINK cuse 00:03:34.068 LINK esnap 00:03:34.068 00:03:34.068 real 1m43.300s 00:03:34.068 user 9m46.875s 00:03:34.068 sys 1m42.356s 00:03:34.068 07:43:36 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:03:34.068 07:43:36 make -- common/autotest_common.sh@10 -- $ set +x 00:03:34.068 ************************************ 00:03:34.068 END TEST make 00:03:34.068 ************************************ 00:03:34.326 07:43:36 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:34.326 07:43:36 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:34.326 07:43:36 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:34.326 07:43:36 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:34.326 07:43:36 -- pm/common@43 -- $ [[ -e 
/home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:03:34.326 07:43:36 -- pm/common@44 -- $ pid=5330 00:03:34.326 07:43:36 -- pm/common@50 -- $ kill -TERM 5330 00:03:34.326 07:43:36 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:34.326 07:43:36 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:03:34.326 07:43:36 -- pm/common@44 -- $ pid=5332 00:03:34.326 07:43:36 -- pm/common@50 -- $ kill -TERM 5332 00:03:34.326 07:43:36 -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:03:34.326 07:43:36 -- common/autotest_common.sh@1681 -- # lcov --version 00:03:34.326 07:43:36 -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:03:34.326 07:43:36 -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:03:34.326 07:43:36 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:34.326 07:43:36 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:34.326 07:43:36 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:34.326 07:43:36 -- scripts/common.sh@336 -- # IFS=.-: 00:03:34.326 07:43:36 -- scripts/common.sh@336 -- # read -ra ver1 00:03:34.326 07:43:36 -- scripts/common.sh@337 -- # IFS=.-: 00:03:34.326 07:43:36 -- scripts/common.sh@337 -- # read -ra ver2 00:03:34.326 07:43:36 -- scripts/common.sh@338 -- # local 'op=<' 00:03:34.326 07:43:36 -- scripts/common.sh@340 -- # ver1_l=2 00:03:34.326 07:43:36 -- scripts/common.sh@341 -- # ver2_l=1 00:03:34.326 07:43:36 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:34.326 07:43:36 -- scripts/common.sh@344 -- # case "$op" in 00:03:34.326 07:43:36 -- scripts/common.sh@345 -- # : 1 00:03:34.326 07:43:36 -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:34.326 07:43:36 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:34.326 07:43:36 -- scripts/common.sh@365 -- # decimal 1 00:03:34.326 07:43:36 -- scripts/common.sh@353 -- # local d=1 00:03:34.326 07:43:36 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:34.326 07:43:36 -- scripts/common.sh@355 -- # echo 1 00:03:34.326 07:43:36 -- scripts/common.sh@365 -- # ver1[v]=1 00:03:34.326 07:43:36 -- scripts/common.sh@366 -- # decimal 2 00:03:34.326 07:43:36 -- scripts/common.sh@353 -- # local d=2 00:03:34.326 07:43:36 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:34.326 07:43:36 -- scripts/common.sh@355 -- # echo 2 00:03:34.326 07:43:36 -- scripts/common.sh@366 -- # ver2[v]=2 00:03:34.326 07:43:36 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:34.326 07:43:36 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:34.326 07:43:36 -- scripts/common.sh@368 -- # return 0 00:03:34.326 07:43:36 -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:34.326 07:43:36 -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:03:34.326 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:34.326 --rc genhtml_branch_coverage=1 00:03:34.326 --rc genhtml_function_coverage=1 00:03:34.326 --rc genhtml_legend=1 00:03:34.326 --rc geninfo_all_blocks=1 00:03:34.326 --rc geninfo_unexecuted_blocks=1 00:03:34.326 00:03:34.326 ' 00:03:34.326 07:43:36 -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:03:34.326 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:34.326 --rc genhtml_branch_coverage=1 00:03:34.326 --rc genhtml_function_coverage=1 00:03:34.326 --rc genhtml_legend=1 00:03:34.326 --rc geninfo_all_blocks=1 00:03:34.326 --rc geninfo_unexecuted_blocks=1 00:03:34.326 00:03:34.326 ' 00:03:34.326 07:43:36 -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:03:34.326 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:34.326 --rc genhtml_branch_coverage=1 00:03:34.326 --rc genhtml_function_coverage=1 00:03:34.326 --rc genhtml_legend=1 00:03:34.326 --rc geninfo_all_blocks=1 00:03:34.326 --rc geninfo_unexecuted_blocks=1 00:03:34.326 00:03:34.326 ' 00:03:34.326 07:43:36 -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:03:34.326 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:34.326 --rc genhtml_branch_coverage=1 00:03:34.326 --rc genhtml_function_coverage=1 00:03:34.326 --rc genhtml_legend=1 00:03:34.326 --rc geninfo_all_blocks=1 00:03:34.326 --rc geninfo_unexecuted_blocks=1 00:03:34.326 00:03:34.326 ' 00:03:34.326 07:43:36 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:34.326 07:43:36 -- nvmf/common.sh@7 -- # uname -s 00:03:34.326 07:43:36 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:34.326 07:43:36 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:34.326 07:43:36 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:34.326 07:43:36 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:34.326 07:43:36 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:34.326 07:43:36 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:34.326 07:43:36 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:34.326 07:43:36 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:34.326 07:43:36 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:34.326 07:43:36 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:34.326 07:43:36 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c6f65fa0-95db-4b4b-87bf-38c1f4b14e59 00:03:34.326 
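
The scripts/common.sh trace above is the lcov version gate (`lt 1.15 2`): both version strings are split into fields, the fields are compared left to right, and the first difference decides the order; only when lcov is older than 2 do the extra `--rc lcov_branch_coverage`/`lcov_function_coverage` options get exported. A standalone sketch of the same comparison, simplified to numeric dot-separated fields (the script itself also splits on `-` and `:`):

    # Sketch: element-wise "less than" over version fields, as in the
    # cmp_versions trace above. Returns 0 when $1 sorts before $2.
    version_lt() {
        local -a v1 v2
        IFS=. read -ra v1 <<< "$1"
        IFS=. read -ra v2 <<< "$2"
        local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for (( i = 0; i < n; i++ )); do
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        done
        return 1    # equal versions are not "less than"
    }
    version_lt 1.15 2 && echo "lcov < 2: enable the branch/function rc options"
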
07:43:36 -- nvmf/common.sh@18 -- # NVME_HOSTID=c6f65fa0-95db-4b4b-87bf-38c1f4b14e59 00:03:34.326 07:43:36 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:34.326 07:43:36 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:34.326 07:43:36 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:34.326 07:43:36 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:34.326 07:43:36 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:34.326 07:43:36 -- scripts/common.sh@15 -- # shopt -s extglob 00:03:34.585 07:43:36 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:34.585 07:43:36 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:34.585 07:43:36 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:34.585 07:43:36 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:34.585 07:43:36 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:34.585 07:43:36 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:34.585 07:43:36 -- paths/export.sh@5 -- # export PATH 00:03:34.585 07:43:36 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:34.585 07:43:36 -- nvmf/common.sh@51 -- # : 0 00:03:34.585 07:43:36 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:34.585 07:43:36 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:03:34.585 07:43:36 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:34.585 07:43:36 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:34.585 07:43:36 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:34.585 07:43:36 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:34.585 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:34.585 07:43:36 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:34.585 07:43:36 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:34.585 07:43:36 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:34.585 07:43:36 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:34.585 07:43:36 -- spdk/autotest.sh@32 -- # uname -s 00:03:34.585 07:43:36 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:34.585 07:43:36 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:34.585 07:43:36 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:34.585 07:43:36 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:03:34.585 07:43:36 -- 
spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:34.585 07:43:36 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:34.585 07:43:36 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:34.585 07:43:36 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:34.585 07:43:36 -- spdk/autotest.sh@48 -- # udevadm_pid=55322 00:03:34.585 07:43:36 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:34.585 07:43:36 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:34.585 07:43:36 -- pm/common@17 -- # local monitor 00:03:34.585 07:43:36 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:34.585 07:43:36 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:34.585 07:43:36 -- pm/common@21 -- # date +%s 00:03:34.585 07:43:36 -- pm/common@25 -- # sleep 1 00:03:34.585 07:43:36 -- pm/common@21 -- # date +%s 00:03:34.585 07:43:36 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1728459816 00:03:34.585 07:43:36 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1728459816 00:03:34.585 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1728459816_collect-cpu-load.pm.log 00:03:34.585 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1728459816_collect-vmstat.pm.log 00:03:35.524 07:43:37 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:35.524 07:43:37 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:35.524 07:43:37 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:35.524 07:43:37 -- common/autotest_common.sh@10 -- # set +x 00:03:35.524 07:43:37 -- spdk/autotest.sh@59 -- # create_test_list 00:03:35.524 07:43:37 -- common/autotest_common.sh@748 -- # xtrace_disable 00:03:35.524 07:43:37 -- common/autotest_common.sh@10 -- # set +x 00:03:35.524 07:43:37 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:03:35.524 07:43:37 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:03:35.524 07:43:37 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:03:35.524 07:43:37 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:03:35.524 07:43:37 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:03:35.524 07:43:37 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:35.524 07:43:37 -- common/autotest_common.sh@1455 -- # uname 00:03:35.524 07:43:37 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:03:35.524 07:43:37 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:35.524 07:43:37 -- common/autotest_common.sh@1475 -- # uname 00:03:35.524 07:43:37 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:03:35.524 07:43:37 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:03:35.524 07:43:37 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:03:35.782 lcov: LCOV version 1.15 00:03:35.782 07:43:37 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc 
geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:03:53.944 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:53.944 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:04:08.817 07:44:08 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:04:08.817 07:44:08 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:08.817 07:44:08 -- common/autotest_common.sh@10 -- # set +x 00:04:08.817 07:44:08 -- spdk/autotest.sh@78 -- # rm -f 00:04:08.817 07:44:08 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:08.817 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:08.817 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:04:08.817 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:04:08.817 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:04:08.817 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:04:08.817 07:44:09 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:04:08.817 07:44:09 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:04:08.817 07:44:09 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:04:08.817 07:44:09 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:04:08.817 07:44:09 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:08.817 07:44:09 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:04:08.817 07:44:09 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:04:08.817 07:44:09 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:08.817 07:44:09 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:08.817 07:44:09 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:08.817 07:44:09 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1c1n1 00:04:08.817 07:44:09 -- common/autotest_common.sh@1648 -- # local device=nvme1c1n1 00:04:08.817 07:44:09 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1c1n1/queue/zoned ]] 00:04:08.817 07:44:09 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:08.817 07:44:09 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:08.817 07:44:09 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:04:08.817 07:44:09 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:04:08.817 07:44:09 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:08.817 07:44:09 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:08.817 07:44:09 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:08.817 07:44:09 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n1 00:04:08.817 07:44:09 -- common/autotest_common.sh@1648 -- # local device=nvme2n1 00:04:08.817 07:44:09 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:04:08.817 07:44:09 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:08.817 07:44:09 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:08.817 07:44:09 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3n1 00:04:08.817 07:44:09 -- common/autotest_common.sh@1648 -- # local device=nvme3n1 00:04:08.817 07:44:09 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:04:08.817 
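
The get_zoned_devs loop above (it finishes with nvme3n2 and nvme3n3 below) classifies each namespace by reading the kernel attribute /sys/block/<dev>/queue/zoned: "none" means a conventional device, anything else (host-aware or host-managed) would be excluded from the raw-wipe path that follows. The same check in isolation, as a sketch:

    # Sketch: collect zoned NVMe namespaces via the queue/zoned sysfs
    # attribute, as the trace above does for nvme0n1..nvme3n3.
    zoned=()
    for sysdir in /sys/block/nvme*; do
        [[ -e "$sysdir/queue/zoned" ]] || continue
        if [[ $(<"$sysdir/queue/zoned") != none ]]; then
            zoned+=("$(basename "$sysdir")")
        fi
    done
    echo "zoned namespaces: ${zoned[*]:-none}"
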
07:44:09 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:08.817 07:44:09 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:08.817 07:44:09 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3n2 00:04:08.817 07:44:09 -- common/autotest_common.sh@1648 -- # local device=nvme3n2 00:04:08.817 07:44:09 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3n2/queue/zoned ]] 00:04:08.817 07:44:09 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:08.817 07:44:09 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:08.817 07:44:09 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3n3 00:04:08.817 07:44:09 -- common/autotest_common.sh@1648 -- # local device=nvme3n3 00:04:08.817 07:44:09 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3n3/queue/zoned ]] 00:04:08.817 07:44:09 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:08.817 07:44:09 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:04:08.817 07:44:09 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:08.817 07:44:09 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:08.817 07:44:09 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:04:08.817 07:44:09 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:04:08.817 07:44:09 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:08.817 No valid GPT data, bailing 00:04:08.817 07:44:09 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:08.817 07:44:09 -- scripts/common.sh@394 -- # pt= 00:04:08.817 07:44:09 -- scripts/common.sh@395 -- # return 1 00:04:08.817 07:44:09 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:08.817 1+0 records in 00:04:08.817 1+0 records out 00:04:08.817 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0121936 s, 86.0 MB/s 00:04:08.817 07:44:09 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:08.817 07:44:09 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:08.817 07:44:09 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:04:08.817 07:44:09 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:04:08.817 07:44:09 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:04:08.817 No valid GPT data, bailing 00:04:08.817 07:44:10 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:08.817 07:44:10 -- scripts/common.sh@394 -- # pt= 00:04:08.817 07:44:10 -- scripts/common.sh@395 -- # return 1 00:04:08.817 07:44:10 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:04:08.817 1+0 records in 00:04:08.817 1+0 records out 00:04:08.817 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00361692 s, 290 MB/s 00:04:08.817 07:44:10 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:08.817 07:44:10 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:08.817 07:44:10 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n1 00:04:08.817 07:44:10 -- scripts/common.sh@381 -- # local block=/dev/nvme2n1 pt 00:04:08.817 07:44:10 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:04:08.817 No valid GPT data, bailing 00:04:08.817 07:44:10 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:04:08.817 07:44:10 -- scripts/common.sh@394 -- # pt= 00:04:08.817 07:44:10 -- scripts/common.sh@395 -- # return 1 00:04:08.817 07:44:10 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:04:08.817 1+0 
records in 00:04:08.817 1+0 records out 00:04:08.817 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00389982 s, 269 MB/s 00:04:08.817 07:44:10 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:08.817 07:44:10 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:08.817 07:44:10 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n1 00:04:08.817 07:44:10 -- scripts/common.sh@381 -- # local block=/dev/nvme3n1 pt 00:04:08.817 07:44:10 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:04:08.817 No valid GPT data, bailing 00:04:08.817 07:44:10 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:04:08.817 07:44:10 -- scripts/common.sh@394 -- # pt= 00:04:08.817 07:44:10 -- scripts/common.sh@395 -- # return 1 00:04:08.817 07:44:10 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:04:08.817 1+0 records in 00:04:08.817 1+0 records out 00:04:08.817 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00396758 s, 264 MB/s 00:04:08.817 07:44:10 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:08.817 07:44:10 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:08.817 07:44:10 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n2 00:04:08.817 07:44:10 -- scripts/common.sh@381 -- # local block=/dev/nvme3n2 pt 00:04:08.817 07:44:10 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n2 00:04:08.817 No valid GPT data, bailing 00:04:08.817 07:44:10 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n2 00:04:08.817 07:44:10 -- scripts/common.sh@394 -- # pt= 00:04:08.817 07:44:10 -- scripts/common.sh@395 -- # return 1 00:04:08.817 07:44:10 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n2 bs=1M count=1 00:04:08.817 1+0 records in 00:04:08.817 1+0 records out 00:04:08.817 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00431291 s, 243 MB/s 00:04:08.817 07:44:10 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:08.817 07:44:10 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:08.817 07:44:10 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n3 00:04:08.817 07:44:10 -- scripts/common.sh@381 -- # local block=/dev/nvme3n3 pt 00:04:08.817 07:44:10 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n3 00:04:08.817 No valid GPT data, bailing 00:04:08.817 07:44:10 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n3 00:04:08.817 07:44:10 -- scripts/common.sh@394 -- # pt= 00:04:08.817 07:44:10 -- scripts/common.sh@395 -- # return 1 00:04:08.817 07:44:10 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n3 bs=1M count=1 00:04:08.817 1+0 records in 00:04:08.817 1+0 records out 00:04:08.817 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00373051 s, 281 MB/s 00:04:08.817 07:44:10 -- spdk/autotest.sh@105 -- # sync 00:04:08.817 07:44:10 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:08.817 07:44:10 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:08.817 07:44:10 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:10.732 07:44:12 -- spdk/autotest.sh@111 -- # uname -s 00:04:10.732 07:44:12 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:04:10.732 07:44:12 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:04:10.732 07:44:12 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:10.991 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:11.556 
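
Every namespace above passed the same gate before being scrubbed: scripts/spdk-gpt.py found no valid GPT, `blkid -s PTTYPE -o value` printed nothing ("No valid GPT data, bailing"), so the device was judged unused and its first MiB zeroed with dd. A hedged sketch of that guard using blkid alone (the real flow consults SPDK's own GPT parser first):

    # Sketch: zero a namespace's first MiB only when no partition table
    # is detected. blkid prints nothing and exits non-zero on no match.
    wipe_if_unpartitioned() {
        local dev=$1 pttype
        pttype=$(blkid -s PTTYPE -o value "$dev" || true)
        if [[ -n "$pttype" ]]; then
            echo "$dev carries a $pttype partition table, refusing to wipe" >&2
            return 1
        fi
        dd if=/dev/zero of="$dev" bs=1M count=1
    }
    wipe_if_unpartitioned /dev/nvme0n1   # destructive; device name is an example
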
Hugepages 00:04:11.556 node hugesize free / total 00:04:11.556 node0 1048576kB 0 / 0 00:04:11.556 node0 2048kB 0 / 0 00:04:11.556 00:04:11.556 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:11.556 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:04:11.556 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:04:11.556 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme2 nvme2n1 00:04:11.813 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme3 nvme3n1 nvme3n2 nvme3n3 00:04:11.813 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:04:11.813 07:44:13 -- spdk/autotest.sh@117 -- # uname -s 00:04:11.813 07:44:13 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:04:11.813 07:44:13 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:04:11.813 07:44:13 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:12.379 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:12.945 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:12.945 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:12.945 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:04:12.945 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:04:12.945 07:44:14 -- common/autotest_common.sh@1515 -- # sleep 1 00:04:13.878 07:44:15 -- common/autotest_common.sh@1516 -- # bdfs=() 00:04:13.878 07:44:15 -- common/autotest_common.sh@1516 -- # local bdfs 00:04:13.878 07:44:15 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:04:13.878 07:44:15 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs 00:04:13.878 07:44:15 -- common/autotest_common.sh@1496 -- # bdfs=() 00:04:13.878 07:44:15 -- common/autotest_common.sh@1496 -- # local bdfs 00:04:13.878 07:44:15 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:13.878 07:44:15 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:13.878 07:44:15 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:04:14.136 07:44:15 -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:04:14.136 07:44:15 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:04:14.136 07:44:15 -- common/autotest_common.sh@1520 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:14.393 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:14.651 Waiting for block devices as requested 00:04:14.651 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:04:14.651 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:04:14.651 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:04:14.909 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:04:20.177 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:04:20.177 07:44:21 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:04:20.177 07:44:21 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:04:20.177 07:44:21 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:20.177 07:44:21 -- common/autotest_common.sh@1485 -- # grep 0000:00:10.0/nvme/nvme 00:04:20.177 07:44:21 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:20.177 07:44:21 -- common/autotest_common.sh@1486 -- # 
[[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:04:20.177 07:44:21 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:20.177 07:44:21 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme1 00:04:20.177 07:44:21 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme1 00:04:20.177 07:44:21 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme1 ]] 00:04:20.177 07:44:21 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme1 00:04:20.177 07:44:21 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:04:20.177 07:44:21 -- common/autotest_common.sh@1529 -- # grep oacs 00:04:20.177 07:44:21 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:04:20.177 07:44:21 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:04:20.177 07:44:21 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:04:20.177 07:44:21 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme1 00:04:20.177 07:44:21 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:04:20.177 07:44:21 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:04:20.177 07:44:21 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:04:20.177 07:44:21 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:04:20.177 07:44:21 -- common/autotest_common.sh@1541 -- # continue 00:04:20.177 07:44:21 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:04:20.177 07:44:21 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:04:20.177 07:44:21 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:20.177 07:44:21 -- common/autotest_common.sh@1485 -- # grep 0000:00:11.0/nvme/nvme 00:04:20.177 07:44:21 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:20.177 07:44:21 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:04:20.177 07:44:21 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:20.177 07:44:21 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:04:20.177 07:44:21 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:04:20.177 07:44:21 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:04:20.177 07:44:21 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:04:20.177 07:44:21 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:04:20.177 07:44:21 -- common/autotest_common.sh@1529 -- # grep oacs 00:04:20.177 07:44:21 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:04:20.177 07:44:21 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:04:20.177 07:44:21 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:04:20.177 07:44:21 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:04:20.177 07:44:21 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme0 00:04:20.177 07:44:21 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:04:20.177 07:44:21 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:04:20.177 07:44:21 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:04:20.177 07:44:21 -- common/autotest_common.sh@1541 -- # continue 00:04:20.177 07:44:21 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:04:20.177 07:44:21 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:12.0 00:04:20.177 07:44:21 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 
/sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:20.177 07:44:21 -- common/autotest_common.sh@1485 -- # grep 0000:00:12.0/nvme/nvme 00:04:20.177 07:44:21 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:04:20.177 07:44:21 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 ]] 00:04:20.177 07:44:21 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:04:20.177 07:44:21 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme2 00:04:20.177 07:44:21 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme2 00:04:20.177 07:44:21 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme2 ]] 00:04:20.177 07:44:21 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme2 00:04:20.177 07:44:21 -- common/autotest_common.sh@1529 -- # grep oacs 00:04:20.177 07:44:21 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:04:20.177 07:44:21 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:04:20.177 07:44:21 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:04:20.177 07:44:21 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:04:20.177 07:44:21 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme2 00:04:20.177 07:44:21 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:04:20.177 07:44:21 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:04:20.177 07:44:21 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:04:20.177 07:44:21 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:04:20.177 07:44:21 -- common/autotest_common.sh@1541 -- # continue 00:04:20.177 07:44:21 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:04:20.177 07:44:21 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:13.0 00:04:20.177 07:44:21 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:20.177 07:44:21 -- common/autotest_common.sh@1485 -- # grep 0000:00:13.0/nvme/nvme 00:04:20.177 07:44:21 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:04:20.177 07:44:21 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 ]] 00:04:20.177 07:44:21 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:04:20.177 07:44:21 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme3 00:04:20.177 07:44:21 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme3 00:04:20.177 07:44:21 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme3 ]] 00:04:20.177 07:44:21 -- common/autotest_common.sh@1529 -- # grep oacs 00:04:20.177 07:44:21 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme3 00:04:20.177 07:44:21 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:04:20.178 07:44:21 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:04:20.178 07:44:21 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:04:20.178 07:44:21 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:04:20.178 07:44:21 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme3 00:04:20.178 07:44:21 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:04:20.178 07:44:21 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:04:20.178 07:44:21 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:04:20.178 07:44:21 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 
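
For each controller, the loop above interrogates `nvme id-ctrl` twice: once for oacs (Optional Admin Command Support, where bit 3, mask 0x8, indicates namespace management, hence oacs_ns_manage=8 from oacs=0x12a) and once for unvmcap (unallocated NVM capacity, expected to be 0 after a clean revert). The same field extraction as a sketch:

    # Sketch: pull oacs and unvmcap out of `nvme id-ctrl` the way the
    # trace above does, then test the namespace-management bit (0x8).
    ctrlr=/dev/nvme0   # assumed controller character device
    oacs=$(nvme id-ctrl "$ctrlr" | grep oacs | cut -d: -f2)
    unvmcap=$(nvme id-ctrl "$ctrlr" | grep unvmcap | cut -d: -f2)
    # arithmetic context tolerates the leading space and the 0x prefix
    if (( (oacs & 0x8) != 0 )) && (( unvmcap == 0 )); then
        echo "$ctrlr supports ns-manage and reports no unallocated capacity"
    fi
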
00:04:20.178 07:44:21 -- common/autotest_common.sh@1541 -- # continue 00:04:20.178 07:44:21 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:04:20.178 07:44:21 -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:20.178 07:44:21 -- common/autotest_common.sh@10 -- # set +x 00:04:20.178 07:44:21 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:04:20.178 07:44:21 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:20.178 07:44:21 -- common/autotest_common.sh@10 -- # set +x 00:04:20.178 07:44:21 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:20.436 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:21.370 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:04:21.370 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:21.371 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:21.371 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:04:21.371 07:44:23 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:04:21.371 07:44:23 -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:21.371 07:44:23 -- common/autotest_common.sh@10 -- # set +x 00:04:21.371 07:44:23 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:04:21.371 07:44:23 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:04:21.371 07:44:23 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:04:21.371 07:44:23 -- common/autotest_common.sh@1561 -- # bdfs=() 00:04:21.371 07:44:23 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:04:21.371 07:44:23 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:04:21.371 07:44:23 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:04:21.371 07:44:23 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:04:21.371 07:44:23 -- common/autotest_common.sh@1496 -- # bdfs=() 00:04:21.371 07:44:23 -- common/autotest_common.sh@1496 -- # local bdfs 00:04:21.371 07:44:23 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:21.371 07:44:23 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:21.371 07:44:23 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:04:21.371 07:44:23 -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:04:21.371 07:44:23 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:04:21.371 07:44:23 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:04:21.371 07:44:23 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:04:21.371 07:44:23 -- common/autotest_common.sh@1564 -- # device=0x0010 00:04:21.371 07:44:23 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:21.371 07:44:23 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:04:21.371 07:44:23 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:04:21.371 07:44:23 -- common/autotest_common.sh@1564 -- # device=0x0010 00:04:21.371 07:44:23 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:21.371 07:44:23 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:04:21.371 07:44:23 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:12.0/device 00:04:21.371 07:44:23 -- common/autotest_common.sh@1564 -- # device=0x0010 00:04:21.371 07:44:23 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 
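
opal_revert_cleanup above builds its BDF list from gen_nvme.sh's JSON via jq, then reads each device's PCI device ID out of sysfs and compares it with 0x0a54, the ID this cleanup targets (the backslash-escaped `\0\x\0\a\5\4` is just a literal 0x0a54 in `[[ == ]]` pattern context); the QEMU controllers here all report 0x0010, so nothing is reverted. A sketch of that filter:

    # Sketch: enumerate NVMe BDFs from the generated SPDK config and keep
    # only those whose PCI device ID matches a target, as in the log above.
    target=0x0a54
    mapfile -t bdfs < <(/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh \
        | jq -r '.config[].params.traddr')
    for bdf in "${bdfs[@]}"; do
        dev_id=$(cat "/sys/bus/pci/devices/$bdf/device")
        [[ "$dev_id" == "$target" ]] && echo "$bdf matches $target"
    done
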
00:04:21.371 07:44:23 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:04:21.371 07:44:23 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:13.0/device 00:04:21.371 07:44:23 -- common/autotest_common.sh@1564 -- # device=0x0010 00:04:21.371 07:44:23 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:21.371 07:44:23 -- common/autotest_common.sh@1570 -- # (( 0 > 0 )) 00:04:21.371 07:44:23 -- common/autotest_common.sh@1570 -- # return 0 00:04:21.371 07:44:23 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:04:21.371 07:44:23 -- common/autotest_common.sh@1578 -- # return 0 00:04:21.371 07:44:23 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:04:21.371 07:44:23 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:04:21.371 07:44:23 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:21.371 07:44:23 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:21.371 07:44:23 -- spdk/autotest.sh@149 -- # timing_enter lib 00:04:21.371 07:44:23 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:21.371 07:44:23 -- common/autotest_common.sh@10 -- # set +x 00:04:21.371 07:44:23 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:04:21.371 07:44:23 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:21.371 07:44:23 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:21.371 07:44:23 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:21.371 07:44:23 -- common/autotest_common.sh@10 -- # set +x 00:04:21.371 ************************************ 00:04:21.371 START TEST env 00:04:21.371 ************************************ 00:04:21.371 07:44:23 env -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:21.630 * Looking for test storage... 00:04:21.630 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:04:21.630 07:44:23 env -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:21.630 07:44:23 env -- common/autotest_common.sh@1681 -- # lcov --version 00:04:21.630 07:44:23 env -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:21.630 07:44:23 env -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:21.630 07:44:23 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:21.630 07:44:23 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:21.630 07:44:23 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:21.630 07:44:23 env -- scripts/common.sh@336 -- # IFS=.-: 00:04:21.630 07:44:23 env -- scripts/common.sh@336 -- # read -ra ver1 00:04:21.630 07:44:23 env -- scripts/common.sh@337 -- # IFS=.-: 00:04:21.630 07:44:23 env -- scripts/common.sh@337 -- # read -ra ver2 00:04:21.630 07:44:23 env -- scripts/common.sh@338 -- # local 'op=<' 00:04:21.630 07:44:23 env -- scripts/common.sh@340 -- # ver1_l=2 00:04:21.630 07:44:23 env -- scripts/common.sh@341 -- # ver2_l=1 00:04:21.630 07:44:23 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:21.630 07:44:23 env -- scripts/common.sh@344 -- # case "$op" in 00:04:21.630 07:44:23 env -- scripts/common.sh@345 -- # : 1 00:04:21.630 07:44:23 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:21.630 07:44:23 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:21.630 07:44:23 env -- scripts/common.sh@365 -- # decimal 1 00:04:21.630 07:44:23 env -- scripts/common.sh@353 -- # local d=1 00:04:21.630 07:44:23 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:21.630 07:44:23 env -- scripts/common.sh@355 -- # echo 1 00:04:21.630 07:44:23 env -- scripts/common.sh@365 -- # ver1[v]=1 00:04:21.630 07:44:23 env -- scripts/common.sh@366 -- # decimal 2 00:04:21.630 07:44:23 env -- scripts/common.sh@353 -- # local d=2 00:04:21.630 07:44:23 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:21.630 07:44:23 env -- scripts/common.sh@355 -- # echo 2 00:04:21.630 07:44:23 env -- scripts/common.sh@366 -- # ver2[v]=2 00:04:21.630 07:44:23 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:21.630 07:44:23 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:21.630 07:44:23 env -- scripts/common.sh@368 -- # return 0 00:04:21.630 07:44:23 env -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:21.630 07:44:23 env -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:21.630 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:21.630 --rc genhtml_branch_coverage=1 00:04:21.630 --rc genhtml_function_coverage=1 00:04:21.630 --rc genhtml_legend=1 00:04:21.630 --rc geninfo_all_blocks=1 00:04:21.630 --rc geninfo_unexecuted_blocks=1 00:04:21.630 00:04:21.630 ' 00:04:21.630 07:44:23 env -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:21.630 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:21.630 --rc genhtml_branch_coverage=1 00:04:21.630 --rc genhtml_function_coverage=1 00:04:21.630 --rc genhtml_legend=1 00:04:21.630 --rc geninfo_all_blocks=1 00:04:21.630 --rc geninfo_unexecuted_blocks=1 00:04:21.630 00:04:21.630 ' 00:04:21.630 07:44:23 env -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:21.630 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:21.630 --rc genhtml_branch_coverage=1 00:04:21.630 --rc genhtml_function_coverage=1 00:04:21.630 --rc genhtml_legend=1 00:04:21.630 --rc geninfo_all_blocks=1 00:04:21.630 --rc geninfo_unexecuted_blocks=1 00:04:21.630 00:04:21.630 ' 00:04:21.630 07:44:23 env -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:21.630 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:21.630 --rc genhtml_branch_coverage=1 00:04:21.630 --rc genhtml_function_coverage=1 00:04:21.630 --rc genhtml_legend=1 00:04:21.630 --rc geninfo_all_blocks=1 00:04:21.630 --rc geninfo_unexecuted_blocks=1 00:04:21.630 00:04:21.630 ' 00:04:21.630 07:44:23 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:21.630 07:44:23 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:21.630 07:44:23 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:21.630 07:44:23 env -- common/autotest_common.sh@10 -- # set +x 00:04:21.630 ************************************ 00:04:21.630 START TEST env_memory 00:04:21.630 ************************************ 00:04:21.630 07:44:23 env.env_memory -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:21.630 00:04:21.630 00:04:21.630 CUnit - A unit testing framework for C - Version 2.1-3 00:04:21.630 http://cunit.sourceforge.net/ 00:04:21.630 00:04:21.630 00:04:21.630 Suite: memory 00:04:21.888 Test: alloc and free memory map ...[2024-10-09 07:44:23.651514] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:21.888 passed 00:04:21.888 Test: mem map translation ...[2024-10-09 07:44:23.711849] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:21.888 [2024-10-09 07:44:23.711955] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:21.888 [2024-10-09 07:44:23.712052] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:21.888 [2024-10-09 07:44:23.712106] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:21.888 passed 00:04:21.888 Test: mem map registration ...[2024-10-09 07:44:23.810502] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:04:21.888 [2024-10-09 07:44:23.810630] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:04:21.888 passed 00:04:22.147 Test: mem map adjacent registrations ...passed 00:04:22.147 00:04:22.147 Run Summary: Type Total Ran Passed Failed Inactive 00:04:22.147 suites 1 1 n/a 0 0 00:04:22.147 tests 4 4 4 0 0 00:04:22.147 asserts 152 152 152 0 n/a 00:04:22.147 00:04:22.147 Elapsed time = 0.340 seconds 00:04:22.147 00:04:22.147 real 0m0.377s 00:04:22.147 user 0m0.349s 00:04:22.147 sys 0m0.019s 00:04:22.147 07:44:23 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:22.147 07:44:23 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:22.147 ************************************ 00:04:22.147 END TEST env_memory 00:04:22.147 ************************************ 00:04:22.147 07:44:23 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:22.147 07:44:23 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:22.147 07:44:23 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:22.147 07:44:23 env -- common/autotest_common.sh@10 -- # set +x 00:04:22.147 ************************************ 00:04:22.147 START TEST env_vtophys 00:04:22.147 ************************************ 00:04:22.147 07:44:24 env.env_vtophys -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:22.147 EAL: lib.eal log level changed from notice to debug 00:04:22.147 EAL: Detected lcore 0 as core 0 on socket 0 00:04:22.147 EAL: Detected lcore 1 as core 0 on socket 0 00:04:22.147 EAL: Detected lcore 2 as core 0 on socket 0 00:04:22.147 EAL: Detected lcore 3 as core 0 on socket 0 00:04:22.147 EAL: Detected lcore 4 as core 0 on socket 0 00:04:22.147 EAL: Detected lcore 5 as core 0 on socket 0 00:04:22.147 EAL: Detected lcore 6 as core 0 on socket 0 00:04:22.147 EAL: Detected lcore 7 as core 0 on socket 0 00:04:22.147 EAL: Detected lcore 8 as core 0 on socket 0 00:04:22.147 EAL: Detected lcore 9 as core 0 on socket 0 00:04:22.147 EAL: Maximum logical cores by configuration: 128 00:04:22.147 EAL: Detected CPU lcores: 10 00:04:22.147 EAL: Detected NUMA nodes: 1 00:04:22.147 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:22.147 EAL: Detected shared linkage of DPDK 00:04:22.147 EAL: No 
shared files mode enabled, IPC will be disabled 00:04:22.147 EAL: Selected IOVA mode 'PA' 00:04:22.147 EAL: Probing VFIO support... 00:04:22.147 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:22.147 EAL: VFIO modules not loaded, skipping VFIO support... 00:04:22.147 EAL: Ask a virtual area of 0x2e000 bytes 00:04:22.147 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:22.147 EAL: Setting up physically contiguous memory... 00:04:22.147 EAL: Setting maximum number of open files to 524288 00:04:22.147 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:22.147 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:22.147 EAL: Ask a virtual area of 0x61000 bytes 00:04:22.147 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:22.147 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:22.147 EAL: Ask a virtual area of 0x400000000 bytes 00:04:22.147 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:22.147 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:22.147 EAL: Ask a virtual area of 0x61000 bytes 00:04:22.147 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:22.147 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:22.147 EAL: Ask a virtual area of 0x400000000 bytes 00:04:22.147 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:22.147 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:22.147 EAL: Ask a virtual area of 0x61000 bytes 00:04:22.147 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:22.147 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:22.147 EAL: Ask a virtual area of 0x400000000 bytes 00:04:22.147 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:22.147 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:22.147 EAL: Ask a virtual area of 0x61000 bytes 00:04:22.147 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:22.147 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:22.147 EAL: Ask a virtual area of 0x400000000 bytes 00:04:22.147 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:22.147 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:22.147 EAL: Hugepages will be freed exactly as allocated. 00:04:22.147 EAL: No shared files mode enabled, IPC is disabled 00:04:22.147 EAL: No shared files mode enabled, IPC is disabled 00:04:22.405 EAL: TSC frequency is ~2200000 KHz 00:04:22.405 EAL: Main lcore 0 is ready (tid=7f1d9b217a40;cpuset=[0]) 00:04:22.405 EAL: Trying to obtain current memory policy. 00:04:22.405 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:22.405 EAL: Restoring previous memory policy: 0 00:04:22.405 EAL: request: mp_malloc_sync 00:04:22.405 EAL: No shared files mode enabled, IPC is disabled 00:04:22.405 EAL: Heap on socket 0 was expanded by 2MB 00:04:22.405 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:22.405 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:22.405 EAL: Mem event callback 'spdk:(nil)' registered 00:04:22.405 EAL: Module /sys/module/vfio_pci not found! 
error 2 (No such file or directory) 00:04:22.405 00:04:22.405 00:04:22.405 CUnit - A unit testing framework for C - Version 2.1-3 00:04:22.405 http://cunit.sourceforge.net/ 00:04:22.405 00:04:22.405 00:04:22.405 Suite: components_suite 00:04:22.695 Test: vtophys_malloc_test ...passed 00:04:22.695 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:22.695 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:22.695 EAL: Restoring previous memory policy: 4 00:04:22.695 EAL: Calling mem event callback 'spdk:(nil)' 00:04:22.695 EAL: request: mp_malloc_sync 00:04:22.695 EAL: No shared files mode enabled, IPC is disabled 00:04:22.695 EAL: Heap on socket 0 was expanded by 4MB 00:04:22.695 EAL: Calling mem event callback 'spdk:(nil)' 00:04:22.695 EAL: request: mp_malloc_sync 00:04:22.695 EAL: No shared files mode enabled, IPC is disabled 00:04:22.695 EAL: Heap on socket 0 was shrunk by 4MB 00:04:22.695 EAL: Trying to obtain current memory policy. 00:04:22.695 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:22.695 EAL: Restoring previous memory policy: 4 00:04:22.695 EAL: Calling mem event callback 'spdk:(nil)' 00:04:22.695 EAL: request: mp_malloc_sync 00:04:22.695 EAL: No shared files mode enabled, IPC is disabled 00:04:22.695 EAL: Heap on socket 0 was expanded by 6MB 00:04:22.696 EAL: Calling mem event callback 'spdk:(nil)' 00:04:22.696 EAL: request: mp_malloc_sync 00:04:22.696 EAL: No shared files mode enabled, IPC is disabled 00:04:22.696 EAL: Heap on socket 0 was shrunk by 6MB 00:04:22.696 EAL: Trying to obtain current memory policy. 00:04:22.696 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:22.696 EAL: Restoring previous memory policy: 4 00:04:22.696 EAL: Calling mem event callback 'spdk:(nil)' 00:04:22.696 EAL: request: mp_malloc_sync 00:04:22.696 EAL: No shared files mode enabled, IPC is disabled 00:04:22.696 EAL: Heap on socket 0 was expanded by 10MB 00:04:22.696 EAL: Calling mem event callback 'spdk:(nil)' 00:04:22.696 EAL: request: mp_malloc_sync 00:04:22.696 EAL: No shared files mode enabled, IPC is disabled 00:04:22.696 EAL: Heap on socket 0 was shrunk by 10MB 00:04:22.696 EAL: Trying to obtain current memory policy. 00:04:22.696 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:22.696 EAL: Restoring previous memory policy: 4 00:04:22.696 EAL: Calling mem event callback 'spdk:(nil)' 00:04:22.696 EAL: request: mp_malloc_sync 00:04:22.696 EAL: No shared files mode enabled, IPC is disabled 00:04:22.696 EAL: Heap on socket 0 was expanded by 18MB 00:04:22.959 EAL: Calling mem event callback 'spdk:(nil)' 00:04:22.959 EAL: request: mp_malloc_sync 00:04:22.959 EAL: No shared files mode enabled, IPC is disabled 00:04:22.959 EAL: Heap on socket 0 was shrunk by 18MB 00:04:22.959 EAL: Trying to obtain current memory policy. 00:04:22.959 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:22.959 EAL: Restoring previous memory policy: 4 00:04:22.959 EAL: Calling mem event callback 'spdk:(nil)' 00:04:22.959 EAL: request: mp_malloc_sync 00:04:22.959 EAL: No shared files mode enabled, IPC is disabled 00:04:22.959 EAL: Heap on socket 0 was expanded by 34MB 00:04:22.959 EAL: Calling mem event callback 'spdk:(nil)' 00:04:22.959 EAL: request: mp_malloc_sync 00:04:22.959 EAL: No shared files mode enabled, IPC is disabled 00:04:22.959 EAL: Heap on socket 0 was shrunk by 34MB 00:04:22.959 EAL: Trying to obtain current memory policy. 
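
Each "expanded by N MB" / "shrunk by N MB" pair in this suite is the EAL taking 2 MiB hugepages from the kernel pool for an spdk_malloc and handing them straight back on free — hence the earlier "Hugepages will be freed exactly as allocated." While the test runs, that accounting is visible from another shell; a small illustrative check (kernel paths are standard, not from the log):

    # kernel-wide 2 MiB hugepage pool, sampled before/after an allocation step
    grep -E 'HugePages_(Total|Free)' /proc/meminfo
    cat /sys/kernel/mm/hugepages/hugepages-2048kB/free_hugepages
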
00:04:22.959 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:22.959 EAL: Restoring previous memory policy: 4 00:04:22.959 EAL: Calling mem event callback 'spdk:(nil)' 00:04:22.959 EAL: request: mp_malloc_sync 00:04:22.959 EAL: No shared files mode enabled, IPC is disabled 00:04:22.959 EAL: Heap on socket 0 was expanded by 66MB 00:04:22.959 EAL: Calling mem event callback 'spdk:(nil)' 00:04:22.959 EAL: request: mp_malloc_sync 00:04:22.959 EAL: No shared files mode enabled, IPC is disabled 00:04:22.959 EAL: Heap on socket 0 was shrunk by 66MB 00:04:23.218 EAL: Trying to obtain current memory policy. 00:04:23.218 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:23.218 EAL: Restoring previous memory policy: 4 00:04:23.218 EAL: Calling mem event callback 'spdk:(nil)' 00:04:23.218 EAL: request: mp_malloc_sync 00:04:23.218 EAL: No shared files mode enabled, IPC is disabled 00:04:23.218 EAL: Heap on socket 0 was expanded by 130MB 00:04:23.477 EAL: Calling mem event callback 'spdk:(nil)' 00:04:23.477 EAL: request: mp_malloc_sync 00:04:23.477 EAL: No shared files mode enabled, IPC is disabled 00:04:23.477 EAL: Heap on socket 0 was shrunk by 130MB 00:04:23.477 EAL: Trying to obtain current memory policy. 00:04:23.477 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:23.477 EAL: Restoring previous memory policy: 4 00:04:23.477 EAL: Calling mem event callback 'spdk:(nil)' 00:04:23.477 EAL: request: mp_malloc_sync 00:04:23.477 EAL: No shared files mode enabled, IPC is disabled 00:04:23.477 EAL: Heap on socket 0 was expanded by 258MB 00:04:24.043 EAL: Calling mem event callback 'spdk:(nil)' 00:04:24.043 EAL: request: mp_malloc_sync 00:04:24.043 EAL: No shared files mode enabled, IPC is disabled 00:04:24.043 EAL: Heap on socket 0 was shrunk by 258MB 00:04:24.302 EAL: Trying to obtain current memory policy. 00:04:24.302 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:24.560 EAL: Restoring previous memory policy: 4 00:04:24.560 EAL: Calling mem event callback 'spdk:(nil)' 00:04:24.560 EAL: request: mp_malloc_sync 00:04:24.560 EAL: No shared files mode enabled, IPC is disabled 00:04:24.560 EAL: Heap on socket 0 was expanded by 514MB 00:04:25.127 EAL: Calling mem event callback 'spdk:(nil)' 00:04:25.385 EAL: request: mp_malloc_sync 00:04:25.385 EAL: No shared files mode enabled, IPC is disabled 00:04:25.385 EAL: Heap on socket 0 was shrunk by 514MB 00:04:25.951 EAL: Trying to obtain current memory policy. 
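
"Setting policy MPOL_PREFERRED for socket 0" is the test flipping the kernel NUMA policy (set_mempolicy(2)) around each allocation and then restoring it. A loose shell-level analogue, if you wanted to prefer node 0 for the whole binary rather than per allocation, is numactl — the binary path comes from the run_test line above, the numactl invocation itself is illustrative:

    # prefer NUMA node 0 for all of the process's memory
    numactl --preferred=0 /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys
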
00:04:25.951 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:26.209 EAL: Restoring previous memory policy: 4 00:04:26.209 EAL: Calling mem event callback 'spdk:(nil)' 00:04:26.209 EAL: request: mp_malloc_sync 00:04:26.209 EAL: No shared files mode enabled, IPC is disabled 00:04:26.209 EAL: Heap on socket 0 was expanded by 1026MB 00:04:27.584 EAL: Calling mem event callback 'spdk:(nil)' 00:04:27.842 EAL: request: mp_malloc_sync 00:04:27.842 EAL: No shared files mode enabled, IPC is disabled 00:04:27.842 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:29.213 passed 00:04:29.213 00:04:29.213 Run Summary: Type Total Ran Passed Failed Inactive 00:04:29.213 suites 1 1 n/a 0 0 00:04:29.213 tests 2 2 2 0 0 00:04:29.213 asserts 5754 5754 5754 0 n/a 00:04:29.213 00:04:29.213 Elapsed time = 6.864 seconds 00:04:29.213 EAL: Calling mem event callback 'spdk:(nil)' 00:04:29.213 EAL: request: mp_malloc_sync 00:04:29.213 EAL: No shared files mode enabled, IPC is disabled 00:04:29.213 EAL: Heap on socket 0 was shrunk by 2MB 00:04:29.213 EAL: No shared files mode enabled, IPC is disabled 00:04:29.213 EAL: No shared files mode enabled, IPC is disabled 00:04:29.213 EAL: No shared files mode enabled, IPC is disabled 00:04:29.213 00:04:29.213 real 0m7.172s 00:04:29.213 user 0m6.316s 00:04:29.213 sys 0m0.689s 00:04:29.213 ************************************ 00:04:29.213 END TEST env_vtophys 00:04:29.213 ************************************ 00:04:29.213 07:44:31 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:29.213 07:44:31 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:29.213 07:44:31 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:29.213 07:44:31 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:29.213 07:44:31 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:29.213 07:44:31 env -- common/autotest_common.sh@10 -- # set +x 00:04:29.213 ************************************ 00:04:29.213 START TEST env_pci 00:04:29.213 ************************************ 00:04:29.213 07:44:31 env.env_pci -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:29.471 00:04:29.471 00:04:29.471 CUnit - A unit testing framework for C - Version 2.1-3 00:04:29.471 http://cunit.sourceforge.net/ 00:04:29.471 00:04:29.471 00:04:29.471 Suite: pci 00:04:29.471 Test: pci_hook ...[2024-10-09 07:44:31.251472] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1049:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 58157 has claimed it 00:04:29.471 passed 00:04:29.471 00:04:29.471 Run Summary: Type Total Ran Passed Failed Inactive 00:04:29.471 suites 1 1 n/a 0 0 00:04:29.471 tests 1 1 1 0 0 00:04:29.471 asserts 25 25 25 0 n/a 00:04:29.471 00:04:29.471 Elapsed time = 0.006 seconds 00:04:29.471 EAL: Cannot find device (10000:00:01.0) 00:04:29.471 EAL: Failed to attach device on primary process 00:04:29.471 ************************************ 00:04:29.471 END TEST env_pci 00:04:29.471 ************************************ 00:04:29.471 00:04:29.471 real 0m0.069s 00:04:29.471 user 0m0.028s 00:04:29.471 sys 0m0.040s 00:04:29.471 07:44:31 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:29.471 07:44:31 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:29.471 07:44:31 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:29.471 07:44:31 env -- env/env.sh@15 -- # uname 00:04:29.471 07:44:31 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:29.471 07:44:31 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:29.471 07:44:31 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:29.471 07:44:31 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:04:29.471 07:44:31 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:29.471 07:44:31 env -- common/autotest_common.sh@10 -- # set +x 00:04:29.471 ************************************ 00:04:29.471 START TEST env_dpdk_post_init 00:04:29.471 ************************************ 00:04:29.471 07:44:31 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:29.471 EAL: Detected CPU lcores: 10 00:04:29.471 EAL: Detected NUMA nodes: 1 00:04:29.471 EAL: Detected shared linkage of DPDK 00:04:29.471 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:29.471 EAL: Selected IOVA mode 'PA' 00:04:29.729 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:29.729 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:04:29.729 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:04:29.729 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:12.0 (socket -1) 00:04:29.729 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:13.0 (socket -1) 00:04:29.729 Starting DPDK initialization... 00:04:29.729 Starting SPDK post initialization... 00:04:29.729 SPDK NVMe probe 00:04:29.729 Attaching to 0000:00:10.0 00:04:29.729 Attaching to 0000:00:11.0 00:04:29.729 Attaching to 0000:00:12.0 00:04:29.729 Attaching to 0000:00:13.0 00:04:29.729 Attached to 0000:00:10.0 00:04:29.729 Attached to 0000:00:11.0 00:04:29.729 Attached to 0000:00:13.0 00:04:29.729 Attached to 0000:00:12.0 00:04:29.729 Cleaning up... 
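
The dpdk_post_init run just finished above is reproducible by hand: env.sh assembles the argv shown earlier ("-c 0x1" plus, on Linux, "--base-virtaddr=0x200000000000"), and the binary probes whatever spdk_nvme-bound controllers setup.sh left behind. Re-running it outside the harness would look roughly like this (root assumed for hugepage/device access):

    sudo /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init \
        -c 0x1 --base-virtaddr=0x200000000000
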
00:04:29.729 00:04:29.729 real 0m0.291s 00:04:29.729 user 0m0.102s 00:04:29.729 sys 0m0.089s 00:04:29.729 07:44:31 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:29.729 07:44:31 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:29.729 ************************************ 00:04:29.729 END TEST env_dpdk_post_init 00:04:29.729 ************************************ 00:04:29.729 07:44:31 env -- env/env.sh@26 -- # uname 00:04:29.729 07:44:31 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:29.729 07:44:31 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:29.729 07:44:31 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:29.729 07:44:31 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:29.729 07:44:31 env -- common/autotest_common.sh@10 -- # set +x 00:04:29.729 ************************************ 00:04:29.729 START TEST env_mem_callbacks 00:04:29.729 ************************************ 00:04:29.729 07:44:31 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:29.729 EAL: Detected CPU lcores: 10 00:04:29.729 EAL: Detected NUMA nodes: 1 00:04:29.729 EAL: Detected shared linkage of DPDK 00:04:29.987 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:29.987 EAL: Selected IOVA mode 'PA' 00:04:29.987 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:29.987 00:04:29.987 00:04:29.987 CUnit - A unit testing framework for C - Version 2.1-3 00:04:29.987 http://cunit.sourceforge.net/ 00:04:29.987 00:04:29.987 00:04:29.987 Suite: memory 00:04:29.987 Test: test ... 00:04:29.987 register 0x200000200000 2097152 00:04:29.987 malloc 3145728 00:04:29.987 register 0x200000400000 4194304 00:04:29.987 buf 0x2000004fffc0 len 3145728 PASSED 00:04:29.987 malloc 64 00:04:29.987 buf 0x2000004ffec0 len 64 PASSED 00:04:29.987 malloc 4194304 00:04:29.987 register 0x200000800000 6291456 00:04:29.987 buf 0x2000009fffc0 len 4194304 PASSED 00:04:29.987 free 0x2000004fffc0 3145728 00:04:29.987 free 0x2000004ffec0 64 00:04:29.987 unregister 0x200000400000 4194304 PASSED 00:04:29.987 free 0x2000009fffc0 4194304 00:04:29.987 unregister 0x200000800000 6291456 PASSED 00:04:29.987 malloc 8388608 00:04:29.987 register 0x200000400000 10485760 00:04:29.987 buf 0x2000005fffc0 len 8388608 PASSED 00:04:29.987 free 0x2000005fffc0 8388608 00:04:29.987 unregister 0x200000400000 10485760 PASSED 00:04:29.987 passed 00:04:29.987 00:04:29.987 Run Summary: Type Total Ran Passed Failed Inactive 00:04:29.987 suites 1 1 n/a 0 0 00:04:29.987 tests 1 1 1 0 0 00:04:29.987 asserts 15 15 15 0 n/a 00:04:29.987 00:04:29.987 Elapsed time = 0.059 seconds 00:04:29.987 00:04:29.987 real 0m0.252s 00:04:29.987 user 0m0.093s 00:04:29.987 sys 0m0.057s 00:04:29.987 07:44:31 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:29.987 07:44:31 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:29.987 ************************************ 00:04:29.987 END TEST env_mem_callbacks 00:04:29.987 ************************************ 00:04:29.987 00:04:29.987 real 0m8.654s 00:04:29.987 user 0m7.128s 00:04:29.987 sys 0m1.144s 00:04:29.987 07:44:31 env -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:29.987 07:44:31 env -- common/autotest_common.sh@10 -- # set +x 00:04:29.987 ************************************ 00:04:29.987 END TEST env 00:04:29.987 
************************************ 00:04:30.270 07:44:32 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:30.270 07:44:32 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:30.270 07:44:32 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:30.270 07:44:32 -- common/autotest_common.sh@10 -- # set +x 00:04:30.270 ************************************ 00:04:30.270 START TEST rpc 00:04:30.270 ************************************ 00:04:30.270 07:44:32 rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:30.270 * Looking for test storage... 00:04:30.270 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:30.270 07:44:32 rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:30.270 07:44:32 rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:04:30.270 07:44:32 rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:30.270 07:44:32 rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:30.270 07:44:32 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:30.270 07:44:32 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:30.270 07:44:32 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:30.270 07:44:32 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:30.270 07:44:32 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:30.270 07:44:32 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:30.270 07:44:32 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:30.270 07:44:32 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:30.270 07:44:32 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:30.270 07:44:32 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:30.270 07:44:32 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:30.270 07:44:32 rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:30.270 07:44:32 rpc -- scripts/common.sh@345 -- # : 1 00:04:30.270 07:44:32 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:30.270 07:44:32 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:30.270 07:44:32 rpc -- scripts/common.sh@365 -- # decimal 1 00:04:30.270 07:44:32 rpc -- scripts/common.sh@353 -- # local d=1 00:04:30.270 07:44:32 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:30.270 07:44:32 rpc -- scripts/common.sh@355 -- # echo 1 00:04:30.270 07:44:32 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:30.270 07:44:32 rpc -- scripts/common.sh@366 -- # decimal 2 00:04:30.270 07:44:32 rpc -- scripts/common.sh@353 -- # local d=2 00:04:30.270 07:44:32 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:30.270 07:44:32 rpc -- scripts/common.sh@355 -- # echo 2 00:04:30.270 07:44:32 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:30.270 07:44:32 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:30.270 07:44:32 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:30.270 07:44:32 rpc -- scripts/common.sh@368 -- # return 0 00:04:30.270 07:44:32 rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:30.270 07:44:32 rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:30.270 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:30.270 --rc genhtml_branch_coverage=1 00:04:30.270 --rc genhtml_function_coverage=1 00:04:30.270 --rc genhtml_legend=1 00:04:30.270 --rc geninfo_all_blocks=1 00:04:30.270 --rc geninfo_unexecuted_blocks=1 00:04:30.270 00:04:30.270 ' 00:04:30.270 07:44:32 rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:30.270 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:30.270 --rc genhtml_branch_coverage=1 00:04:30.270 --rc genhtml_function_coverage=1 00:04:30.270 --rc genhtml_legend=1 00:04:30.270 --rc geninfo_all_blocks=1 00:04:30.270 --rc geninfo_unexecuted_blocks=1 00:04:30.270 00:04:30.270 ' 00:04:30.270 07:44:32 rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:30.270 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:30.270 --rc genhtml_branch_coverage=1 00:04:30.270 --rc genhtml_function_coverage=1 00:04:30.270 --rc genhtml_legend=1 00:04:30.270 --rc geninfo_all_blocks=1 00:04:30.270 --rc geninfo_unexecuted_blocks=1 00:04:30.270 00:04:30.270 ' 00:04:30.270 07:44:32 rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:30.270 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:30.270 --rc genhtml_branch_coverage=1 00:04:30.270 --rc genhtml_function_coverage=1 00:04:30.270 --rc genhtml_legend=1 00:04:30.270 --rc geninfo_all_blocks=1 00:04:30.270 --rc geninfo_unexecuted_blocks=1 00:04:30.270 00:04:30.270 ' 00:04:30.270 07:44:32 rpc -- rpc/rpc.sh@65 -- # spdk_pid=58283 00:04:30.270 07:44:32 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:30.270 07:44:32 rpc -- rpc/rpc.sh@67 -- # waitforlisten 58283 00:04:30.270 07:44:32 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:04:30.270 07:44:32 rpc -- common/autotest_common.sh@831 -- # '[' -z 58283 ']' 00:04:30.270 07:44:32 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:30.270 07:44:32 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:30.270 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:30.270 07:44:32 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
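
The startup handshake above (rpc.sh@64-67) is: launch spdk_tgt with the bdev tracepoint group enabled, record its pid, and poll the default UNIX-domain RPC socket until the target answers. A minimal sketch of what waitforlisten amounts to, using rpc_get_methods as the liveness probe (the probe choice and poll interval are illustrative):

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev &
    spdk_pid=$!
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock -t 1 \
            rpc_get_methods &>/dev/null; do
        sleep 0.2
    done
    echo "spdk_tgt is listening on /var/tmp/spdk.sock (pid $spdk_pid)"
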
00:04:30.270 07:44:32 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:30.270 07:44:32 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:30.528 [2024-10-09 07:44:32.349303] Starting SPDK v25.01-pre git sha1 1c2942c86 / DPDK 24.03.0 initialization... 00:04:30.528 [2024-10-09 07:44:32.349504] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58283 ] 00:04:30.528 [2024-10-09 07:44:32.525836] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:30.786 [2024-10-09 07:44:32.751304] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:30.786 [2024-10-09 07:44:32.751417] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 58283' to capture a snapshot of events at runtime. 00:04:30.786 [2024-10-09 07:44:32.751438] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:30.786 [2024-10-09 07:44:32.751457] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:30.786 [2024-10-09 07:44:32.751472] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid58283 for offline analysis/debug. 00:04:30.786 [2024-10-09 07:44:32.752945] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:04:31.746 07:44:33 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:31.746 07:44:33 rpc -- common/autotest_common.sh@864 -- # return 0 00:04:31.746 07:44:33 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:31.746 07:44:33 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:31.746 07:44:33 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:31.746 07:44:33 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:31.746 07:44:33 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:31.746 07:44:33 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:31.746 07:44:33 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:31.746 ************************************ 00:04:31.746 START TEST rpc_integrity 00:04:31.746 ************************************ 00:04:31.746 07:44:33 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:04:31.746 07:44:33 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:31.746 07:44:33 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:31.746 07:44:33 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:31.746 07:44:33 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:31.746 07:44:33 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:31.746 07:44:33 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:31.746 07:44:33 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:31.746 07:44:33 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:31.746 07:44:33 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:31.746 07:44:33 
rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:31.746 07:44:33 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:31.746 07:44:33 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:31.746 07:44:33 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:31.746 07:44:33 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:31.746 07:44:33 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:31.746 07:44:33 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:31.746 07:44:33 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:31.746 { 00:04:31.746 "name": "Malloc0", 00:04:31.746 "aliases": [ 00:04:31.746 "faa1926f-40b7-448e-aa30-9fd9f9dda591" 00:04:31.746 ], 00:04:31.746 "product_name": "Malloc disk", 00:04:31.746 "block_size": 512, 00:04:31.746 "num_blocks": 16384, 00:04:31.746 "uuid": "faa1926f-40b7-448e-aa30-9fd9f9dda591", 00:04:31.746 "assigned_rate_limits": { 00:04:31.746 "rw_ios_per_sec": 0, 00:04:31.746 "rw_mbytes_per_sec": 0, 00:04:31.746 "r_mbytes_per_sec": 0, 00:04:31.746 "w_mbytes_per_sec": 0 00:04:31.746 }, 00:04:31.746 "claimed": false, 00:04:31.746 "zoned": false, 00:04:31.746 "supported_io_types": { 00:04:31.746 "read": true, 00:04:31.746 "write": true, 00:04:31.746 "unmap": true, 00:04:31.746 "flush": true, 00:04:31.746 "reset": true, 00:04:31.746 "nvme_admin": false, 00:04:31.746 "nvme_io": false, 00:04:31.746 "nvme_io_md": false, 00:04:31.746 "write_zeroes": true, 00:04:31.746 "zcopy": true, 00:04:31.746 "get_zone_info": false, 00:04:31.746 "zone_management": false, 00:04:31.746 "zone_append": false, 00:04:31.746 "compare": false, 00:04:31.746 "compare_and_write": false, 00:04:31.746 "abort": true, 00:04:31.746 "seek_hole": false, 00:04:31.746 "seek_data": false, 00:04:31.746 "copy": true, 00:04:31.746 "nvme_iov_md": false 00:04:31.746 }, 00:04:31.746 "memory_domains": [ 00:04:31.746 { 00:04:31.746 "dma_device_id": "system", 00:04:31.746 "dma_device_type": 1 00:04:31.746 }, 00:04:31.746 { 00:04:31.746 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:31.746 "dma_device_type": 2 00:04:31.746 } 00:04:31.746 ], 00:04:31.746 "driver_specific": {} 00:04:31.746 } 00:04:31.746 ]' 00:04:31.746 07:44:33 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:31.746 07:44:33 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:31.746 07:44:33 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:31.746 07:44:33 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:31.746 07:44:33 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:31.746 [2024-10-09 07:44:33.694977] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:31.746 [2024-10-09 07:44:33.695071] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:31.746 [2024-10-09 07:44:33.695120] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:04:31.746 [2024-10-09 07:44:33.695141] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:31.747 [2024-10-09 07:44:33.698015] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:31.747 [2024-10-09 07:44:33.698073] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:31.747 Passthru0 00:04:31.747 07:44:33 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:31.747 
07:44:33 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:31.747 07:44:33 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:31.747 07:44:33 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:31.747 07:44:33 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:31.747 07:44:33 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:31.747 { 00:04:31.747 "name": "Malloc0", 00:04:31.747 "aliases": [ 00:04:31.747 "faa1926f-40b7-448e-aa30-9fd9f9dda591" 00:04:31.747 ], 00:04:31.747 "product_name": "Malloc disk", 00:04:31.747 "block_size": 512, 00:04:31.747 "num_blocks": 16384, 00:04:31.747 "uuid": "faa1926f-40b7-448e-aa30-9fd9f9dda591", 00:04:31.747 "assigned_rate_limits": { 00:04:31.747 "rw_ios_per_sec": 0, 00:04:31.747 "rw_mbytes_per_sec": 0, 00:04:31.747 "r_mbytes_per_sec": 0, 00:04:31.747 "w_mbytes_per_sec": 0 00:04:31.747 }, 00:04:31.747 "claimed": true, 00:04:31.747 "claim_type": "exclusive_write", 00:04:31.747 "zoned": false, 00:04:31.747 "supported_io_types": { 00:04:31.747 "read": true, 00:04:31.747 "write": true, 00:04:31.747 "unmap": true, 00:04:31.747 "flush": true, 00:04:31.747 "reset": true, 00:04:31.747 "nvme_admin": false, 00:04:31.747 "nvme_io": false, 00:04:31.747 "nvme_io_md": false, 00:04:31.747 "write_zeroes": true, 00:04:31.747 "zcopy": true, 00:04:31.747 "get_zone_info": false, 00:04:31.747 "zone_management": false, 00:04:31.747 "zone_append": false, 00:04:31.747 "compare": false, 00:04:31.747 "compare_and_write": false, 00:04:31.747 "abort": true, 00:04:31.747 "seek_hole": false, 00:04:31.747 "seek_data": false, 00:04:31.747 "copy": true, 00:04:31.747 "nvme_iov_md": false 00:04:31.747 }, 00:04:31.747 "memory_domains": [ 00:04:31.747 { 00:04:31.747 "dma_device_id": "system", 00:04:31.747 "dma_device_type": 1 00:04:31.747 }, 00:04:31.747 { 00:04:31.747 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:31.747 "dma_device_type": 2 00:04:31.747 } 00:04:31.747 ], 00:04:31.747 "driver_specific": {} 00:04:31.747 }, 00:04:31.747 { 00:04:31.747 "name": "Passthru0", 00:04:31.747 "aliases": [ 00:04:31.747 "da373408-12cb-5ce8-a2c3-2c78ad48848c" 00:04:31.747 ], 00:04:31.747 "product_name": "passthru", 00:04:31.747 "block_size": 512, 00:04:31.747 "num_blocks": 16384, 00:04:31.747 "uuid": "da373408-12cb-5ce8-a2c3-2c78ad48848c", 00:04:31.747 "assigned_rate_limits": { 00:04:31.747 "rw_ios_per_sec": 0, 00:04:31.747 "rw_mbytes_per_sec": 0, 00:04:31.747 "r_mbytes_per_sec": 0, 00:04:31.747 "w_mbytes_per_sec": 0 00:04:31.747 }, 00:04:31.747 "claimed": false, 00:04:31.747 "zoned": false, 00:04:31.747 "supported_io_types": { 00:04:31.747 "read": true, 00:04:31.747 "write": true, 00:04:31.747 "unmap": true, 00:04:31.747 "flush": true, 00:04:31.747 "reset": true, 00:04:31.747 "nvme_admin": false, 00:04:31.747 "nvme_io": false, 00:04:31.747 "nvme_io_md": false, 00:04:31.747 "write_zeroes": true, 00:04:31.747 "zcopy": true, 00:04:31.747 "get_zone_info": false, 00:04:31.747 "zone_management": false, 00:04:31.747 "zone_append": false, 00:04:31.747 "compare": false, 00:04:31.747 "compare_and_write": false, 00:04:31.747 "abort": true, 00:04:31.747 "seek_hole": false, 00:04:31.747 "seek_data": false, 00:04:31.747 "copy": true, 00:04:31.747 "nvme_iov_md": false 00:04:31.747 }, 00:04:31.747 "memory_domains": [ 00:04:31.747 { 00:04:31.747 "dma_device_id": "system", 00:04:31.747 "dma_device_type": 1 00:04:31.747 }, 00:04:31.747 { 00:04:31.747 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:31.747 "dma_device_type": 2 
00:04:31.747 } 00:04:31.747 ], 00:04:31.747 "driver_specific": { 00:04:31.747 "passthru": { 00:04:31.747 "name": "Passthru0", 00:04:31.747 "base_bdev_name": "Malloc0" 00:04:31.747 } 00:04:31.747 } 00:04:31.747 } 00:04:31.747 ]' 00:04:31.747 07:44:33 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:32.005 07:44:33 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:32.005 07:44:33 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:32.005 07:44:33 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:32.005 07:44:33 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:32.005 07:44:33 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:32.005 07:44:33 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:32.005 07:44:33 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:32.005 07:44:33 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:32.005 07:44:33 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:32.005 07:44:33 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:32.005 07:44:33 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:32.005 07:44:33 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:32.005 07:44:33 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:32.005 07:44:33 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:32.005 07:44:33 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:32.005 07:44:33 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:32.005 00:04:32.006 real 0m0.352s 00:04:32.006 user 0m0.206s 00:04:32.006 sys 0m0.050s 00:04:32.006 07:44:33 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:32.006 ************************************ 00:04:32.006 07:44:33 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:32.006 END TEST rpc_integrity 00:04:32.006 ************************************ 00:04:32.006 07:44:33 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:32.006 07:44:33 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:32.006 07:44:33 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:32.006 07:44:33 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:32.006 ************************************ 00:04:32.006 START TEST rpc_plugins 00:04:32.006 ************************************ 00:04:32.006 07:44:33 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:04:32.006 07:44:33 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:32.006 07:44:33 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:32.006 07:44:33 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:32.006 07:44:33 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:32.006 07:44:33 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:32.006 07:44:33 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:32.006 07:44:33 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:32.006 07:44:33 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:32.006 07:44:33 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:32.006 07:44:33 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:32.006 { 00:04:32.006 "name": "Malloc1", 00:04:32.006 "aliases": 
[ 00:04:32.006 "832d24d8-b385-40a0-b26c-088f96759cda" 00:04:32.006 ], 00:04:32.006 "product_name": "Malloc disk", 00:04:32.006 "block_size": 4096, 00:04:32.006 "num_blocks": 256, 00:04:32.006 "uuid": "832d24d8-b385-40a0-b26c-088f96759cda", 00:04:32.006 "assigned_rate_limits": { 00:04:32.006 "rw_ios_per_sec": 0, 00:04:32.006 "rw_mbytes_per_sec": 0, 00:04:32.006 "r_mbytes_per_sec": 0, 00:04:32.006 "w_mbytes_per_sec": 0 00:04:32.006 }, 00:04:32.006 "claimed": false, 00:04:32.006 "zoned": false, 00:04:32.006 "supported_io_types": { 00:04:32.006 "read": true, 00:04:32.006 "write": true, 00:04:32.006 "unmap": true, 00:04:32.006 "flush": true, 00:04:32.006 "reset": true, 00:04:32.006 "nvme_admin": false, 00:04:32.006 "nvme_io": false, 00:04:32.006 "nvme_io_md": false, 00:04:32.006 "write_zeroes": true, 00:04:32.006 "zcopy": true, 00:04:32.006 "get_zone_info": false, 00:04:32.006 "zone_management": false, 00:04:32.006 "zone_append": false, 00:04:32.006 "compare": false, 00:04:32.006 "compare_and_write": false, 00:04:32.006 "abort": true, 00:04:32.006 "seek_hole": false, 00:04:32.006 "seek_data": false, 00:04:32.006 "copy": true, 00:04:32.006 "nvme_iov_md": false 00:04:32.006 }, 00:04:32.006 "memory_domains": [ 00:04:32.006 { 00:04:32.006 "dma_device_id": "system", 00:04:32.006 "dma_device_type": 1 00:04:32.006 }, 00:04:32.006 { 00:04:32.006 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:32.006 "dma_device_type": 2 00:04:32.006 } 00:04:32.006 ], 00:04:32.006 "driver_specific": {} 00:04:32.006 } 00:04:32.006 ]' 00:04:32.006 07:44:33 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:32.265 07:44:34 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:32.265 07:44:34 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:32.265 07:44:34 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:32.265 07:44:34 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:32.265 07:44:34 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:32.265 07:44:34 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:32.265 07:44:34 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:32.265 07:44:34 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:32.265 07:44:34 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:32.265 07:44:34 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:32.265 07:44:34 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:32.265 07:44:34 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:32.265 00:04:32.265 real 0m0.159s 00:04:32.265 user 0m0.103s 00:04:32.265 sys 0m0.018s 00:04:32.265 07:44:34 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:32.265 07:44:34 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:32.265 ************************************ 00:04:32.265 END TEST rpc_plugins 00:04:32.265 ************************************ 00:04:32.265 07:44:34 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:32.265 07:44:34 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:32.265 07:44:34 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:32.265 07:44:34 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:32.265 ************************************ 00:04:32.265 START TEST rpc_trace_cmd_test 00:04:32.265 ************************************ 00:04:32.265 07:44:34 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 
-- # rpc_trace_cmd_test 00:04:32.265 07:44:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:32.265 07:44:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:32.265 07:44:34 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:32.265 07:44:34 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:32.265 07:44:34 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:32.265 07:44:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:32.265 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid58283", 00:04:32.265 "tpoint_group_mask": "0x8", 00:04:32.265 "iscsi_conn": { 00:04:32.265 "mask": "0x2", 00:04:32.265 "tpoint_mask": "0x0" 00:04:32.265 }, 00:04:32.265 "scsi": { 00:04:32.265 "mask": "0x4", 00:04:32.265 "tpoint_mask": "0x0" 00:04:32.265 }, 00:04:32.265 "bdev": { 00:04:32.265 "mask": "0x8", 00:04:32.265 "tpoint_mask": "0xffffffffffffffff" 00:04:32.265 }, 00:04:32.265 "nvmf_rdma": { 00:04:32.265 "mask": "0x10", 00:04:32.265 "tpoint_mask": "0x0" 00:04:32.265 }, 00:04:32.265 "nvmf_tcp": { 00:04:32.265 "mask": "0x20", 00:04:32.265 "tpoint_mask": "0x0" 00:04:32.265 }, 00:04:32.265 "ftl": { 00:04:32.265 "mask": "0x40", 00:04:32.265 "tpoint_mask": "0x0" 00:04:32.265 }, 00:04:32.265 "blobfs": { 00:04:32.265 "mask": "0x80", 00:04:32.265 "tpoint_mask": "0x0" 00:04:32.265 }, 00:04:32.265 "dsa": { 00:04:32.265 "mask": "0x200", 00:04:32.265 "tpoint_mask": "0x0" 00:04:32.265 }, 00:04:32.265 "thread": { 00:04:32.265 "mask": "0x400", 00:04:32.265 "tpoint_mask": "0x0" 00:04:32.265 }, 00:04:32.265 "nvme_pcie": { 00:04:32.265 "mask": "0x800", 00:04:32.265 "tpoint_mask": "0x0" 00:04:32.265 }, 00:04:32.265 "iaa": { 00:04:32.265 "mask": "0x1000", 00:04:32.265 "tpoint_mask": "0x0" 00:04:32.265 }, 00:04:32.265 "nvme_tcp": { 00:04:32.265 "mask": "0x2000", 00:04:32.265 "tpoint_mask": "0x0" 00:04:32.265 }, 00:04:32.265 "bdev_nvme": { 00:04:32.265 "mask": "0x4000", 00:04:32.265 "tpoint_mask": "0x0" 00:04:32.265 }, 00:04:32.265 "sock": { 00:04:32.265 "mask": "0x8000", 00:04:32.265 "tpoint_mask": "0x0" 00:04:32.265 }, 00:04:32.265 "blob": { 00:04:32.265 "mask": "0x10000", 00:04:32.265 "tpoint_mask": "0x0" 00:04:32.265 }, 00:04:32.265 "bdev_raid": { 00:04:32.265 "mask": "0x20000", 00:04:32.265 "tpoint_mask": "0x0" 00:04:32.265 }, 00:04:32.265 "scheduler": { 00:04:32.265 "mask": "0x40000", 00:04:32.265 "tpoint_mask": "0x0" 00:04:32.265 } 00:04:32.265 }' 00:04:32.265 07:44:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:32.265 07:44:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:04:32.265 07:44:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:32.265 07:44:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:32.265 07:44:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:32.523 07:44:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:32.523 07:44:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:32.523 07:44:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:32.523 07:44:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:32.523 07:44:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:32.523 00:04:32.523 real 0m0.263s 00:04:32.523 user 0m0.226s 00:04:32.523 sys 0m0.026s 00:04:32.523 07:44:34 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # 
xtrace_disable 00:04:32.523 ************************************ 00:04:32.523 07:44:34 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:32.523 END TEST rpc_trace_cmd_test 00:04:32.523 ************************************ 00:04:32.523 07:44:34 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:32.523 07:44:34 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:32.523 07:44:34 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:32.523 07:44:34 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:32.523 07:44:34 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:32.523 07:44:34 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:32.523 ************************************ 00:04:32.523 START TEST rpc_daemon_integrity 00:04:32.523 ************************************ 00:04:32.523 07:44:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:04:32.523 07:44:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:32.523 07:44:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:32.523 07:44:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:32.523 07:44:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:32.523 07:44:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:32.523 07:44:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:32.523 07:44:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:32.523 07:44:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:32.523 07:44:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:32.523 07:44:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:32.523 07:44:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:32.523 07:44:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:32.782 07:44:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:32.782 07:44:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:32.782 07:44:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:32.782 07:44:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:32.782 07:44:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:32.782 { 00:04:32.782 "name": "Malloc2", 00:04:32.782 "aliases": [ 00:04:32.782 "50b640e0-6fc9-4754-83a4-5d8f68f2cead" 00:04:32.782 ], 00:04:32.782 "product_name": "Malloc disk", 00:04:32.782 "block_size": 512, 00:04:32.782 "num_blocks": 16384, 00:04:32.782 "uuid": "50b640e0-6fc9-4754-83a4-5d8f68f2cead", 00:04:32.782 "assigned_rate_limits": { 00:04:32.782 "rw_ios_per_sec": 0, 00:04:32.782 "rw_mbytes_per_sec": 0, 00:04:32.782 "r_mbytes_per_sec": 0, 00:04:32.782 "w_mbytes_per_sec": 0 00:04:32.782 }, 00:04:32.782 "claimed": false, 00:04:32.782 "zoned": false, 00:04:32.782 "supported_io_types": { 00:04:32.782 "read": true, 00:04:32.782 "write": true, 00:04:32.782 "unmap": true, 00:04:32.782 "flush": true, 00:04:32.782 "reset": true, 00:04:32.782 "nvme_admin": false, 00:04:32.782 "nvme_io": false, 00:04:32.782 "nvme_io_md": false, 00:04:32.782 "write_zeroes": true, 00:04:32.782 "zcopy": true, 00:04:32.782 "get_zone_info": false, 00:04:32.782 "zone_management": false, 00:04:32.782 "zone_append": false, 00:04:32.782 "compare": false, 00:04:32.782 
"compare_and_write": false, 00:04:32.782 "abort": true, 00:04:32.782 "seek_hole": false, 00:04:32.782 "seek_data": false, 00:04:32.782 "copy": true, 00:04:32.782 "nvme_iov_md": false 00:04:32.782 }, 00:04:32.782 "memory_domains": [ 00:04:32.782 { 00:04:32.782 "dma_device_id": "system", 00:04:32.782 "dma_device_type": 1 00:04:32.782 }, 00:04:32.782 { 00:04:32.782 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:32.782 "dma_device_type": 2 00:04:32.782 } 00:04:32.782 ], 00:04:32.782 "driver_specific": {} 00:04:32.782 } 00:04:32.782 ]' 00:04:32.782 07:44:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:32.782 07:44:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:32.782 07:44:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:32.782 07:44:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:32.782 07:44:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:32.782 [2024-10-09 07:44:34.606792] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:32.782 [2024-10-09 07:44:34.606884] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:32.782 [2024-10-09 07:44:34.606920] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:04:32.782 [2024-10-09 07:44:34.606938] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:32.782 [2024-10-09 07:44:34.609762] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:32.782 [2024-10-09 07:44:34.609817] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:32.782 Passthru0 00:04:32.782 07:44:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:32.782 07:44:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:32.782 07:44:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:32.782 07:44:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:32.782 07:44:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:32.782 07:44:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:32.782 { 00:04:32.782 "name": "Malloc2", 00:04:32.782 "aliases": [ 00:04:32.782 "50b640e0-6fc9-4754-83a4-5d8f68f2cead" 00:04:32.782 ], 00:04:32.782 "product_name": "Malloc disk", 00:04:32.782 "block_size": 512, 00:04:32.782 "num_blocks": 16384, 00:04:32.782 "uuid": "50b640e0-6fc9-4754-83a4-5d8f68f2cead", 00:04:32.782 "assigned_rate_limits": { 00:04:32.782 "rw_ios_per_sec": 0, 00:04:32.782 "rw_mbytes_per_sec": 0, 00:04:32.782 "r_mbytes_per_sec": 0, 00:04:32.782 "w_mbytes_per_sec": 0 00:04:32.782 }, 00:04:32.782 "claimed": true, 00:04:32.782 "claim_type": "exclusive_write", 00:04:32.782 "zoned": false, 00:04:32.782 "supported_io_types": { 00:04:32.782 "read": true, 00:04:32.782 "write": true, 00:04:32.782 "unmap": true, 00:04:32.782 "flush": true, 00:04:32.782 "reset": true, 00:04:32.782 "nvme_admin": false, 00:04:32.782 "nvme_io": false, 00:04:32.782 "nvme_io_md": false, 00:04:32.782 "write_zeroes": true, 00:04:32.782 "zcopy": true, 00:04:32.782 "get_zone_info": false, 00:04:32.782 "zone_management": false, 00:04:32.782 "zone_append": false, 00:04:32.782 "compare": false, 00:04:32.782 "compare_and_write": false, 00:04:32.782 "abort": true, 00:04:32.782 "seek_hole": false, 00:04:32.782 "seek_data": false, 
00:04:32.782 "copy": true, 00:04:32.782 "nvme_iov_md": false 00:04:32.782 }, 00:04:32.782 "memory_domains": [ 00:04:32.782 { 00:04:32.782 "dma_device_id": "system", 00:04:32.782 "dma_device_type": 1 00:04:32.782 }, 00:04:32.782 { 00:04:32.782 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:32.782 "dma_device_type": 2 00:04:32.782 } 00:04:32.782 ], 00:04:32.782 "driver_specific": {} 00:04:32.782 }, 00:04:32.782 { 00:04:32.782 "name": "Passthru0", 00:04:32.782 "aliases": [ 00:04:32.782 "a57384ba-b004-598a-bff7-d232004bb037" 00:04:32.782 ], 00:04:32.782 "product_name": "passthru", 00:04:32.782 "block_size": 512, 00:04:32.782 "num_blocks": 16384, 00:04:32.782 "uuid": "a57384ba-b004-598a-bff7-d232004bb037", 00:04:32.782 "assigned_rate_limits": { 00:04:32.782 "rw_ios_per_sec": 0, 00:04:32.782 "rw_mbytes_per_sec": 0, 00:04:32.782 "r_mbytes_per_sec": 0, 00:04:32.782 "w_mbytes_per_sec": 0 00:04:32.782 }, 00:04:32.782 "claimed": false, 00:04:32.782 "zoned": false, 00:04:32.782 "supported_io_types": { 00:04:32.782 "read": true, 00:04:32.782 "write": true, 00:04:32.782 "unmap": true, 00:04:32.782 "flush": true, 00:04:32.782 "reset": true, 00:04:32.782 "nvme_admin": false, 00:04:32.782 "nvme_io": false, 00:04:32.782 "nvme_io_md": false, 00:04:32.782 "write_zeroes": true, 00:04:32.782 "zcopy": true, 00:04:32.782 "get_zone_info": false, 00:04:32.782 "zone_management": false, 00:04:32.782 "zone_append": false, 00:04:32.782 "compare": false, 00:04:32.782 "compare_and_write": false, 00:04:32.782 "abort": true, 00:04:32.782 "seek_hole": false, 00:04:32.782 "seek_data": false, 00:04:32.782 "copy": true, 00:04:32.782 "nvme_iov_md": false 00:04:32.782 }, 00:04:32.782 "memory_domains": [ 00:04:32.782 { 00:04:32.782 "dma_device_id": "system", 00:04:32.782 "dma_device_type": 1 00:04:32.782 }, 00:04:32.782 { 00:04:32.782 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:32.782 "dma_device_type": 2 00:04:32.782 } 00:04:32.782 ], 00:04:32.782 "driver_specific": { 00:04:32.782 "passthru": { 00:04:32.782 "name": "Passthru0", 00:04:32.782 "base_bdev_name": "Malloc2" 00:04:32.782 } 00:04:32.782 } 00:04:32.782 } 00:04:32.782 ]' 00:04:32.782 07:44:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:32.782 07:44:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:32.782 07:44:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:32.782 07:44:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:32.782 07:44:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:32.782 07:44:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:32.782 07:44:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:32.782 07:44:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:32.782 07:44:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:32.782 07:44:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:32.782 07:44:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:32.782 07:44:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:32.782 07:44:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:32.782 07:44:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:32.782 07:44:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 
00:04:32.782 07:44:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:33.041 07:44:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:33.041 00:04:33.041 real 0m0.340s 00:04:33.041 user 0m0.211s 00:04:33.041 sys 0m0.036s 00:04:33.041 07:44:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:33.041 ************************************ 00:04:33.041 07:44:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:33.041 END TEST rpc_daemon_integrity 00:04:33.041 ************************************ 00:04:33.041 07:44:34 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:33.041 07:44:34 rpc -- rpc/rpc.sh@84 -- # killprocess 58283 00:04:33.041 07:44:34 rpc -- common/autotest_common.sh@950 -- # '[' -z 58283 ']' 00:04:33.041 07:44:34 rpc -- common/autotest_common.sh@954 -- # kill -0 58283 00:04:33.041 07:44:34 rpc -- common/autotest_common.sh@955 -- # uname 00:04:33.041 07:44:34 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:33.041 07:44:34 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58283 00:04:33.041 07:44:34 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:33.041 killing process with pid 58283 00:04:33.041 07:44:34 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:33.041 07:44:34 rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58283' 00:04:33.041 07:44:34 rpc -- common/autotest_common.sh@969 -- # kill 58283 00:04:33.041 07:44:34 rpc -- common/autotest_common.sh@974 -- # wait 58283 00:04:35.611 00:04:35.611 real 0m5.074s 00:04:35.611 user 0m5.912s 00:04:35.611 sys 0m0.744s 00:04:35.611 07:44:37 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:35.611 07:44:37 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:35.611 ************************************ 00:04:35.611 END TEST rpc 00:04:35.611 ************************************ 00:04:35.611 07:44:37 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:35.611 07:44:37 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:35.611 07:44:37 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:35.611 07:44:37 -- common/autotest_common.sh@10 -- # set +x 00:04:35.611 ************************************ 00:04:35.611 START TEST skip_rpc 00:04:35.611 ************************************ 00:04:35.611 07:44:37 skip_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:35.611 * Looking for test storage... 
00:04:35.611 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:35.611 07:44:37 skip_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:35.611 07:44:37 skip_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:04:35.611 07:44:37 skip_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:35.611 07:44:37 skip_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:35.611 07:44:37 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:35.611 07:44:37 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:35.611 07:44:37 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:35.611 07:44:37 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:35.611 07:44:37 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:35.611 07:44:37 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:35.611 07:44:37 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:35.611 07:44:37 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:35.611 07:44:37 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:35.611 07:44:37 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:35.611 07:44:37 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:35.611 07:44:37 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:35.611 07:44:37 skip_rpc -- scripts/common.sh@345 -- # : 1 00:04:35.611 07:44:37 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:35.611 07:44:37 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:35.611 07:44:37 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:35.611 07:44:37 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:04:35.611 07:44:37 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:35.611 07:44:37 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:04:35.611 07:44:37 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:35.611 07:44:37 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:35.611 07:44:37 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:04:35.611 07:44:37 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:35.611 07:44:37 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:04:35.611 07:44:37 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:35.611 07:44:37 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:35.611 07:44:37 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:35.611 07:44:37 skip_rpc -- scripts/common.sh@368 -- # return 0 00:04:35.611 07:44:37 skip_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:35.611 07:44:37 skip_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:35.611 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:35.611 --rc genhtml_branch_coverage=1 00:04:35.611 --rc genhtml_function_coverage=1 00:04:35.611 --rc genhtml_legend=1 00:04:35.611 --rc geninfo_all_blocks=1 00:04:35.611 --rc geninfo_unexecuted_blocks=1 00:04:35.611 00:04:35.611 ' 00:04:35.611 07:44:37 skip_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:35.611 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:35.611 --rc genhtml_branch_coverage=1 00:04:35.611 --rc genhtml_function_coverage=1 00:04:35.611 --rc genhtml_legend=1 00:04:35.611 --rc geninfo_all_blocks=1 00:04:35.611 --rc geninfo_unexecuted_blocks=1 00:04:35.611 00:04:35.611 ' 00:04:35.611 07:44:37 skip_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 
00:04:35.611 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:35.611 --rc genhtml_branch_coverage=1 00:04:35.611 --rc genhtml_function_coverage=1 00:04:35.611 --rc genhtml_legend=1 00:04:35.611 --rc geninfo_all_blocks=1 00:04:35.611 --rc geninfo_unexecuted_blocks=1 00:04:35.611 00:04:35.611 ' 00:04:35.611 07:44:37 skip_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:35.611 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:35.611 --rc genhtml_branch_coverage=1 00:04:35.611 --rc genhtml_function_coverage=1 00:04:35.611 --rc genhtml_legend=1 00:04:35.611 --rc geninfo_all_blocks=1 00:04:35.611 --rc geninfo_unexecuted_blocks=1 00:04:35.611 00:04:35.611 ' 00:04:35.611 07:44:37 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:35.611 07:44:37 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:35.611 07:44:37 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:35.611 07:44:37 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:35.611 07:44:37 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:35.611 07:44:37 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:35.611 ************************************ 00:04:35.611 START TEST skip_rpc 00:04:35.611 ************************************ 00:04:35.611 07:44:37 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:04:35.611 07:44:37 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=58508 00:04:35.611 07:44:37 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:35.611 07:44:37 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:35.611 07:44:37 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:35.611 [2024-10-09 07:44:37.463143] Starting SPDK v25.01-pre git sha1 1c2942c86 / DPDK 24.03.0 initialization... 
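(The target just launched above runs with --no-rpc-server, so the point of this test is that RPC traffic must fail. The NOT helper used a few lines below inverts an exit status; a simplified sketch of the idea, paraphrased from autotest_common.sh, whose real version also validates its argument first, hence the valid_exec_arg and type -t calls visible in the trace:

    # Sketch: pass only if the wrapped command fails.
    NOT() {
        if "$@"; then
            return 1    # unexpectedly succeeded: the test must fail
        fi
        return 0        # failed as required: the test passes
    }
    NOT rpc_cmd spdk_get_version   # holds here only because no RPC server is listening

)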
00:04:35.611 [2024-10-09 07:44:37.463546] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58508 ] 00:04:35.869 [2024-10-09 07:44:37.635819] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:35.869 [2024-10-09 07:44:37.871703] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:04:41.164 07:44:42 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:41.164 07:44:42 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:04:41.164 07:44:42 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:41.164 07:44:42 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:04:41.164 07:44:42 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:41.164 07:44:42 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:04:41.164 07:44:42 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:41.164 07:44:42 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:04:41.164 07:44:42 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:41.164 07:44:42 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:41.164 07:44:42 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:04:41.164 07:44:42 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:04:41.164 07:44:42 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:41.164 07:44:42 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:41.164 07:44:42 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:41.164 07:44:42 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:41.164 07:44:42 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 58508 00:04:41.164 07:44:42 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 58508 ']' 00:04:41.164 07:44:42 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 58508 00:04:41.164 07:44:42 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:04:41.164 07:44:42 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:41.164 07:44:42 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58508 00:04:41.164 07:44:42 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:41.164 07:44:42 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:41.164 07:44:42 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58508' 00:04:41.164 killing process with pid 58508 00:04:41.164 07:44:42 skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 58508 00:04:41.164 07:44:42 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 58508 00:04:43.068 00:04:43.068 ************************************ 00:04:43.068 END TEST skip_rpc 00:04:43.068 ************************************ 00:04:43.068 real 0m7.280s 00:04:43.068 user 0m6.828s 00:04:43.068 sys 0m0.343s 00:04:43.068 07:44:44 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:43.068 07:44:44 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # 
set +x 00:04:43.068 07:44:44 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:43.068 07:44:44 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:43.068 07:44:44 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:43.068 07:44:44 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:43.068 ************************************ 00:04:43.068 START TEST skip_rpc_with_json 00:04:43.068 ************************************ 00:04:43.068 07:44:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:04:43.068 07:44:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:43.068 07:44:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=58616 00:04:43.068 07:44:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:43.068 07:44:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:43.068 07:44:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 58616 00:04:43.068 07:44:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 58616 ']' 00:04:43.068 07:44:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:43.068 07:44:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:43.068 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:43.068 07:44:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:43.068 07:44:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:43.068 07:44:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:43.068 [2024-10-09 07:44:44.793256] Starting SPDK v25.01-pre git sha1 1c2942c86 / DPDK 24.03.0 initialization... 
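(This target, pid 58616, does serve RPC; the test first drives it through a deliberate failure. Reduced to plain rpc.py calls, the exchange that follows is, as a sketch, the error code -19 and "No such device" message it provokes are printed verbatim below:

    rpc.py nvmf_get_transports --trtype tcp   # no transport exists yet: JSON-RPC error -19
    rpc.py nvmf_create_transport -t tcp       # logs '*** TCP Transport Init ***' in the target
    rpc.py save_config > config.json          # snapshot the full running config as JSON

The multi-page JSON that follows is that snapshot, dumped with cat; the trace writes it to test/rpc/config.json.)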
00:04:43.068 [2024-10-09 07:44:44.793571] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58616 ] 00:04:43.068 [2024-10-09 07:44:44.963450] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:43.326 [2024-10-09 07:44:45.175059] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:04:44.263 07:44:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:44.263 07:44:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:04:44.263 07:44:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:44.263 07:44:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:44.263 07:44:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:44.263 [2024-10-09 07:44:45.949895] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:44.263 request: 00:04:44.263 { 00:04:44.263 "trtype": "tcp", 00:04:44.263 "method": "nvmf_get_transports", 00:04:44.263 "req_id": 1 00:04:44.263 } 00:04:44.263 Got JSON-RPC error response 00:04:44.263 response: 00:04:44.263 { 00:04:44.263 "code": -19, 00:04:44.263 "message": "No such device" 00:04:44.263 } 00:04:44.263 07:44:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:04:44.263 07:44:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:44.263 07:44:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:44.263 07:44:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:44.263 [2024-10-09 07:44:45.962045] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:44.263 07:44:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:44.263 07:44:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:44.263 07:44:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:44.263 07:44:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:44.263 07:44:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:44.263 07:44:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:44.263 { 00:04:44.263 "subsystems": [ 00:04:44.263 { 00:04:44.263 "subsystem": "fsdev", 00:04:44.263 "config": [ 00:04:44.263 { 00:04:44.263 "method": "fsdev_set_opts", 00:04:44.263 "params": { 00:04:44.263 "fsdev_io_pool_size": 65535, 00:04:44.263 "fsdev_io_cache_size": 256 00:04:44.263 } 00:04:44.263 } 00:04:44.263 ] 00:04:44.263 }, 00:04:44.263 { 00:04:44.263 "subsystem": "keyring", 00:04:44.263 "config": [] 00:04:44.263 }, 00:04:44.263 { 00:04:44.263 "subsystem": "iobuf", 00:04:44.264 "config": [ 00:04:44.264 { 00:04:44.264 "method": "iobuf_set_options", 00:04:44.264 "params": { 00:04:44.264 "small_pool_count": 8192, 00:04:44.264 "large_pool_count": 1024, 00:04:44.264 "small_bufsize": 8192, 00:04:44.264 "large_bufsize": 135168 00:04:44.264 } 00:04:44.264 } 00:04:44.264 ] 00:04:44.264 }, 00:04:44.264 { 00:04:44.264 "subsystem": "sock", 00:04:44.264 "config": [ 00:04:44.264 { 00:04:44.264 "method": 
"sock_set_default_impl", 00:04:44.264 "params": { 00:04:44.264 "impl_name": "posix" 00:04:44.264 } 00:04:44.264 }, 00:04:44.264 { 00:04:44.264 "method": "sock_impl_set_options", 00:04:44.264 "params": { 00:04:44.264 "impl_name": "ssl", 00:04:44.264 "recv_buf_size": 4096, 00:04:44.264 "send_buf_size": 4096, 00:04:44.264 "enable_recv_pipe": true, 00:04:44.264 "enable_quickack": false, 00:04:44.264 "enable_placement_id": 0, 00:04:44.264 "enable_zerocopy_send_server": true, 00:04:44.264 "enable_zerocopy_send_client": false, 00:04:44.264 "zerocopy_threshold": 0, 00:04:44.264 "tls_version": 0, 00:04:44.264 "enable_ktls": false 00:04:44.264 } 00:04:44.264 }, 00:04:44.264 { 00:04:44.264 "method": "sock_impl_set_options", 00:04:44.264 "params": { 00:04:44.264 "impl_name": "posix", 00:04:44.264 "recv_buf_size": 2097152, 00:04:44.264 "send_buf_size": 2097152, 00:04:44.264 "enable_recv_pipe": true, 00:04:44.264 "enable_quickack": false, 00:04:44.264 "enable_placement_id": 0, 00:04:44.264 "enable_zerocopy_send_server": true, 00:04:44.264 "enable_zerocopy_send_client": false, 00:04:44.264 "zerocopy_threshold": 0, 00:04:44.264 "tls_version": 0, 00:04:44.264 "enable_ktls": false 00:04:44.264 } 00:04:44.264 } 00:04:44.264 ] 00:04:44.264 }, 00:04:44.264 { 00:04:44.264 "subsystem": "vmd", 00:04:44.264 "config": [] 00:04:44.264 }, 00:04:44.264 { 00:04:44.264 "subsystem": "accel", 00:04:44.264 "config": [ 00:04:44.264 { 00:04:44.264 "method": "accel_set_options", 00:04:44.264 "params": { 00:04:44.264 "small_cache_size": 128, 00:04:44.264 "large_cache_size": 16, 00:04:44.264 "task_count": 2048, 00:04:44.264 "sequence_count": 2048, 00:04:44.264 "buf_count": 2048 00:04:44.264 } 00:04:44.264 } 00:04:44.264 ] 00:04:44.264 }, 00:04:44.264 { 00:04:44.264 "subsystem": "bdev", 00:04:44.264 "config": [ 00:04:44.264 { 00:04:44.264 "method": "bdev_set_options", 00:04:44.264 "params": { 00:04:44.264 "bdev_io_pool_size": 65535, 00:04:44.264 "bdev_io_cache_size": 256, 00:04:44.264 "bdev_auto_examine": true, 00:04:44.264 "iobuf_small_cache_size": 128, 00:04:44.264 "iobuf_large_cache_size": 16 00:04:44.264 } 00:04:44.264 }, 00:04:44.264 { 00:04:44.264 "method": "bdev_raid_set_options", 00:04:44.264 "params": { 00:04:44.264 "process_window_size_kb": 1024, 00:04:44.264 "process_max_bandwidth_mb_sec": 0 00:04:44.264 } 00:04:44.264 }, 00:04:44.264 { 00:04:44.264 "method": "bdev_iscsi_set_options", 00:04:44.264 "params": { 00:04:44.264 "timeout_sec": 30 00:04:44.264 } 00:04:44.264 }, 00:04:44.264 { 00:04:44.264 "method": "bdev_nvme_set_options", 00:04:44.264 "params": { 00:04:44.264 "action_on_timeout": "none", 00:04:44.264 "timeout_us": 0, 00:04:44.264 "timeout_admin_us": 0, 00:04:44.264 "keep_alive_timeout_ms": 10000, 00:04:44.264 "arbitration_burst": 0, 00:04:44.264 "low_priority_weight": 0, 00:04:44.264 "medium_priority_weight": 0, 00:04:44.264 "high_priority_weight": 0, 00:04:44.264 "nvme_adminq_poll_period_us": 10000, 00:04:44.264 "nvme_ioq_poll_period_us": 0, 00:04:44.264 "io_queue_requests": 0, 00:04:44.264 "delay_cmd_submit": true, 00:04:44.264 "transport_retry_count": 4, 00:04:44.264 "bdev_retry_count": 3, 00:04:44.264 "transport_ack_timeout": 0, 00:04:44.264 "ctrlr_loss_timeout_sec": 0, 00:04:44.264 "reconnect_delay_sec": 0, 00:04:44.264 "fast_io_fail_timeout_sec": 0, 00:04:44.264 "disable_auto_failback": false, 00:04:44.264 "generate_uuids": false, 00:04:44.264 "transport_tos": 0, 00:04:44.264 "nvme_error_stat": false, 00:04:44.264 "rdma_srq_size": 0, 00:04:44.264 "io_path_stat": false, 00:04:44.264 
"allow_accel_sequence": false, 00:04:44.264 "rdma_max_cq_size": 0, 00:04:44.264 "rdma_cm_event_timeout_ms": 0, 00:04:44.264 "dhchap_digests": [ 00:04:44.264 "sha256", 00:04:44.264 "sha384", 00:04:44.264 "sha512" 00:04:44.264 ], 00:04:44.264 "dhchap_dhgroups": [ 00:04:44.264 "null", 00:04:44.264 "ffdhe2048", 00:04:44.264 "ffdhe3072", 00:04:44.264 "ffdhe4096", 00:04:44.264 "ffdhe6144", 00:04:44.264 "ffdhe8192" 00:04:44.264 ] 00:04:44.264 } 00:04:44.264 }, 00:04:44.264 { 00:04:44.264 "method": "bdev_nvme_set_hotplug", 00:04:44.264 "params": { 00:04:44.264 "period_us": 100000, 00:04:44.264 "enable": false 00:04:44.264 } 00:04:44.264 }, 00:04:44.264 { 00:04:44.264 "method": "bdev_wait_for_examine" 00:04:44.264 } 00:04:44.264 ] 00:04:44.264 }, 00:04:44.264 { 00:04:44.264 "subsystem": "scsi", 00:04:44.264 "config": null 00:04:44.264 }, 00:04:44.264 { 00:04:44.264 "subsystem": "scheduler", 00:04:44.264 "config": [ 00:04:44.264 { 00:04:44.264 "method": "framework_set_scheduler", 00:04:44.264 "params": { 00:04:44.264 "name": "static" 00:04:44.264 } 00:04:44.264 } 00:04:44.264 ] 00:04:44.264 }, 00:04:44.264 { 00:04:44.264 "subsystem": "vhost_scsi", 00:04:44.264 "config": [] 00:04:44.264 }, 00:04:44.264 { 00:04:44.264 "subsystem": "vhost_blk", 00:04:44.264 "config": [] 00:04:44.264 }, 00:04:44.264 { 00:04:44.264 "subsystem": "ublk", 00:04:44.264 "config": [] 00:04:44.264 }, 00:04:44.264 { 00:04:44.264 "subsystem": "nbd", 00:04:44.264 "config": [] 00:04:44.264 }, 00:04:44.264 { 00:04:44.264 "subsystem": "nvmf", 00:04:44.264 "config": [ 00:04:44.264 { 00:04:44.264 "method": "nvmf_set_config", 00:04:44.264 "params": { 00:04:44.264 "discovery_filter": "match_any", 00:04:44.264 "admin_cmd_passthru": { 00:04:44.264 "identify_ctrlr": false 00:04:44.264 }, 00:04:44.264 "dhchap_digests": [ 00:04:44.264 "sha256", 00:04:44.264 "sha384", 00:04:44.264 "sha512" 00:04:44.264 ], 00:04:44.264 "dhchap_dhgroups": [ 00:04:44.264 "null", 00:04:44.264 "ffdhe2048", 00:04:44.264 "ffdhe3072", 00:04:44.264 "ffdhe4096", 00:04:44.264 "ffdhe6144", 00:04:44.264 "ffdhe8192" 00:04:44.264 ] 00:04:44.264 } 00:04:44.264 }, 00:04:44.264 { 00:04:44.264 "method": "nvmf_set_max_subsystems", 00:04:44.264 "params": { 00:04:44.264 "max_subsystems": 1024 00:04:44.264 } 00:04:44.264 }, 00:04:44.264 { 00:04:44.264 "method": "nvmf_set_crdt", 00:04:44.264 "params": { 00:04:44.264 "crdt1": 0, 00:04:44.264 "crdt2": 0, 00:04:44.264 "crdt3": 0 00:04:44.264 } 00:04:44.264 }, 00:04:44.264 { 00:04:44.264 "method": "nvmf_create_transport", 00:04:44.264 "params": { 00:04:44.264 "trtype": "TCP", 00:04:44.264 "max_queue_depth": 128, 00:04:44.264 "max_io_qpairs_per_ctrlr": 127, 00:04:44.264 "in_capsule_data_size": 4096, 00:04:44.264 "max_io_size": 131072, 00:04:44.264 "io_unit_size": 131072, 00:04:44.264 "max_aq_depth": 128, 00:04:44.264 "num_shared_buffers": 511, 00:04:44.264 "buf_cache_size": 4294967295, 00:04:44.264 "dif_insert_or_strip": false, 00:04:44.264 "zcopy": false, 00:04:44.264 "c2h_success": true, 00:04:44.264 "sock_priority": 0, 00:04:44.264 "abort_timeout_sec": 1, 00:04:44.264 "ack_timeout": 0, 00:04:44.264 "data_wr_pool_size": 0 00:04:44.264 } 00:04:44.264 } 00:04:44.264 ] 00:04:44.264 }, 00:04:44.264 { 00:04:44.264 "subsystem": "iscsi", 00:04:44.264 "config": [ 00:04:44.264 { 00:04:44.264 "method": "iscsi_set_options", 00:04:44.264 "params": { 00:04:44.264 "node_base": "iqn.2016-06.io.spdk", 00:04:44.264 "max_sessions": 128, 00:04:44.264 "max_connections_per_session": 2, 00:04:44.264 "max_queue_depth": 64, 00:04:44.264 "default_time2wait": 2, 
00:04:44.264 "default_time2retain": 20, 00:04:44.264 "first_burst_length": 8192, 00:04:44.264 "immediate_data": true, 00:04:44.264 "allow_duplicated_isid": false, 00:04:44.264 "error_recovery_level": 0, 00:04:44.264 "nop_timeout": 60, 00:04:44.264 "nop_in_interval": 30, 00:04:44.264 "disable_chap": false, 00:04:44.264 "require_chap": false, 00:04:44.264 "mutual_chap": false, 00:04:44.264 "chap_group": 0, 00:04:44.264 "max_large_datain_per_connection": 64, 00:04:44.264 "max_r2t_per_connection": 4, 00:04:44.264 "pdu_pool_size": 36864, 00:04:44.264 "immediate_data_pool_size": 16384, 00:04:44.264 "data_out_pool_size": 2048 00:04:44.264 } 00:04:44.264 } 00:04:44.264 ] 00:04:44.264 } 00:04:44.264 ] 00:04:44.264 } 00:04:44.264 07:44:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:44.264 07:44:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 58616 00:04:44.264 07:44:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 58616 ']' 00:04:44.264 07:44:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 58616 00:04:44.264 07:44:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:04:44.264 07:44:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:44.264 07:44:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58616 00:04:44.264 killing process with pid 58616 00:04:44.264 07:44:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:44.264 07:44:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:44.265 07:44:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58616' 00:04:44.265 07:44:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 58616 00:04:44.265 07:44:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 58616 00:04:46.794 07:44:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=58668 00:04:46.794 07:44:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:46.794 07:44:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:52.064 07:44:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 58668 00:04:52.064 07:44:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 58668 ']' 00:04:52.064 07:44:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 58668 00:04:52.064 07:44:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:04:52.064 07:44:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:52.064 07:44:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58668 00:04:52.064 killing process with pid 58668 00:04:52.064 07:44:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:52.064 07:44:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:52.064 07:44:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58668' 00:04:52.064 07:44:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 58668 
00:04:52.064 07:44:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 58668 00:04:53.965 07:44:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:53.965 07:44:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:53.965 ************************************ 00:04:53.965 END TEST skip_rpc_with_json 00:04:53.965 ************************************ 00:04:53.965 00:04:53.965 real 0m10.998s 00:04:53.965 user 0m10.649s 00:04:53.965 sys 0m0.787s 00:04:53.965 07:44:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:53.965 07:44:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:53.965 07:44:55 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:53.965 07:44:55 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:53.965 07:44:55 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:53.965 07:44:55 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:53.965 ************************************ 00:04:53.965 START TEST skip_rpc_with_delay 00:04:53.965 ************************************ 00:04:53.965 07:44:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:04:53.965 07:44:55 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:53.965 07:44:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:04:53.965 07:44:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:53.965 07:44:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:53.965 07:44:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:53.965 07:44:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:53.965 07:44:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:53.965 07:44:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:53.965 07:44:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:53.965 07:44:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:53.965 07:44:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:53.965 07:44:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:53.965 [2024-10-09 07:44:55.872788] app.c: 840:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:04:53.965 [2024-10-09 07:44:55.873001] app.c: 719:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:04:53.965 07:44:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:04:53.965 07:44:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:53.965 07:44:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:53.965 07:44:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:53.965 00:04:53.965 real 0m0.214s 00:04:53.965 user 0m0.123s 00:04:53.965 sys 0m0.088s 00:04:53.965 ************************************ 00:04:53.965 END TEST skip_rpc_with_delay 00:04:53.965 ************************************ 00:04:53.965 07:44:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:53.965 07:44:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:54.226 07:44:55 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:54.226 07:44:55 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:54.226 07:44:55 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:54.226 07:44:55 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:54.226 07:44:55 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:54.226 07:44:55 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:54.226 ************************************ 00:04:54.226 START TEST exit_on_failed_rpc_init 00:04:54.226 ************************************ 00:04:54.226 07:44:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:04:54.226 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:54.226 07:44:55 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=58796 00:04:54.226 07:44:55 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:54.226 07:44:55 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 58796 00:04:54.226 07:44:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 58796 ']' 00:04:54.226 07:44:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:54.226 07:44:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:54.226 07:44:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:54.226 07:44:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:54.226 07:44:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:54.226 [2024-10-09 07:44:56.117445] Starting SPDK v25.01-pre git sha1 1c2942c86 / DPDK 24.03.0 initialization... 
00:04:54.226 [2024-10-09 07:44:56.118235] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58796 ] 00:04:54.484 [2024-10-09 07:44:56.282232] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:54.484 [2024-10-09 07:44:56.468809] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:04:55.420 07:44:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:55.420 07:44:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:04:55.420 07:44:57 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:55.420 07:44:57 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:55.420 07:44:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:04:55.420 07:44:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:55.420 07:44:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:55.420 07:44:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:55.420 07:44:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:55.420 07:44:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:55.420 07:44:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:55.420 07:44:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:55.420 07:44:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:55.420 07:44:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:55.420 07:44:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:55.420 [2024-10-09 07:44:57.400900] Starting SPDK v25.01-pre git sha1 1c2942c86 / DPDK 24.03.0 initialization... 00:04:55.420 [2024-10-09 07:44:57.401059] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58825 ] 00:04:55.679 [2024-10-09 07:44:57.567543] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:55.938 [2024-10-09 07:44:57.752639] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:04:55.938 [2024-10-09 07:44:57.752761] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
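('Specify another.' above is the second target refusing to share /var/tmp/spdk.sock; the two follow-up errors and the non-zero exit the test demands come next. Stripped of the harness, the collision is simply this, a sketch using the flags from the trace:

    spdk_tgt -m 0x1 & sleep 5   # first instance claims the default /var/tmp/spdk.sock
    spdk_tgt -m 0x2             # second instance hits 'in use. Specify another.' and aborts
    echo $?                     # non-zero, which is exactly what exit_on_failed_rpc_init requires

)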
00:04:55.938 [2024-10-09 07:44:57.752783] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:55.938 [2024-10-09 07:44:57.752799] app.c:1062:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:56.199 07:44:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:04:56.199 07:44:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:56.199 07:44:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:04:56.199 07:44:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:04:56.199 07:44:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:04:56.199 07:44:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:56.199 07:44:58 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:56.199 07:44:58 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 58796 00:04:56.199 07:44:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 58796 ']' 00:04:56.199 07:44:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 58796 00:04:56.199 07:44:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:04:56.199 07:44:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:56.199 07:44:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58796 00:04:56.470 killing process with pid 58796 00:04:56.470 07:44:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:56.470 07:44:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:56.470 07:44:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58796' 00:04:56.470 07:44:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 58796 00:04:56.470 07:44:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 58796 00:04:59.000 ************************************ 00:04:59.000 END TEST exit_on_failed_rpc_init 00:04:59.000 ************************************ 00:04:59.000 00:04:59.000 real 0m4.453s 00:04:59.000 user 0m5.213s 00:04:59.000 sys 0m0.556s 00:04:59.000 07:45:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:59.000 07:45:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:59.000 07:45:00 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:59.000 00:04:59.000 real 0m23.343s 00:04:59.000 user 0m23.007s 00:04:59.000 sys 0m1.968s 00:04:59.000 07:45:00 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:59.000 07:45:00 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:59.000 ************************************ 00:04:59.000 END TEST skip_rpc 00:04:59.000 ************************************ 00:04:59.000 07:45:00 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:59.000 07:45:00 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:59.000 07:45:00 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:59.000 07:45:00 -- common/autotest_common.sh@10 -- # set +x 00:04:59.000 
************************************ 00:04:59.001 START TEST rpc_client 00:04:59.001 ************************************ 00:04:59.001 07:45:00 rpc_client -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:59.001 * Looking for test storage... 00:04:59.001 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:04:59.001 07:45:00 rpc_client -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:59.001 07:45:00 rpc_client -- common/autotest_common.sh@1681 -- # lcov --version 00:04:59.001 07:45:00 rpc_client -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:59.001 07:45:00 rpc_client -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:59.001 07:45:00 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:59.001 07:45:00 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:59.001 07:45:00 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:59.001 07:45:00 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:04:59.001 07:45:00 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:04:59.001 07:45:00 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:04:59.001 07:45:00 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:04:59.001 07:45:00 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:04:59.001 07:45:00 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:04:59.001 07:45:00 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:04:59.001 07:45:00 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:59.001 07:45:00 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:04:59.001 07:45:00 rpc_client -- scripts/common.sh@345 -- # : 1 00:04:59.001 07:45:00 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:59.001 07:45:00 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:59.001 07:45:00 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:04:59.001 07:45:00 rpc_client -- scripts/common.sh@353 -- # local d=1 00:04:59.001 07:45:00 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:59.001 07:45:00 rpc_client -- scripts/common.sh@355 -- # echo 1 00:04:59.001 07:45:00 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:04:59.001 07:45:00 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:04:59.001 07:45:00 rpc_client -- scripts/common.sh@353 -- # local d=2 00:04:59.001 07:45:00 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:59.001 07:45:00 rpc_client -- scripts/common.sh@355 -- # echo 2 00:04:59.001 07:45:00 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:04:59.001 07:45:00 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:59.001 07:45:00 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:59.001 07:45:00 rpc_client -- scripts/common.sh@368 -- # return 0 00:04:59.001 07:45:00 rpc_client -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:59.001 07:45:00 rpc_client -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:59.001 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.001 --rc genhtml_branch_coverage=1 00:04:59.001 --rc genhtml_function_coverage=1 00:04:59.001 --rc genhtml_legend=1 00:04:59.001 --rc geninfo_all_blocks=1 00:04:59.001 --rc geninfo_unexecuted_blocks=1 00:04:59.001 00:04:59.001 ' 00:04:59.001 07:45:00 rpc_client -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:59.001 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.001 --rc genhtml_branch_coverage=1 00:04:59.001 --rc genhtml_function_coverage=1 00:04:59.001 --rc genhtml_legend=1 00:04:59.001 --rc geninfo_all_blocks=1 00:04:59.001 --rc geninfo_unexecuted_blocks=1 00:04:59.001 00:04:59.001 ' 00:04:59.001 07:45:00 rpc_client -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:59.001 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.001 --rc genhtml_branch_coverage=1 00:04:59.001 --rc genhtml_function_coverage=1 00:04:59.001 --rc genhtml_legend=1 00:04:59.001 --rc geninfo_all_blocks=1 00:04:59.001 --rc geninfo_unexecuted_blocks=1 00:04:59.001 00:04:59.001 ' 00:04:59.001 07:45:00 rpc_client -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:59.001 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.001 --rc genhtml_branch_coverage=1 00:04:59.001 --rc genhtml_function_coverage=1 00:04:59.001 --rc genhtml_legend=1 00:04:59.001 --rc geninfo_all_blocks=1 00:04:59.001 --rc geninfo_unexecuted_blocks=1 00:04:59.001 00:04:59.001 ' 00:04:59.001 07:45:00 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:04:59.001 OK 00:04:59.001 07:45:00 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:59.001 00:04:59.001 real 0m0.239s 00:04:59.001 user 0m0.140s 00:04:59.001 sys 0m0.108s 00:04:59.001 07:45:00 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:59.001 07:45:00 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:59.001 ************************************ 00:04:59.001 END TEST rpc_client 00:04:59.001 ************************************ 00:04:59.001 07:45:00 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:59.001 07:45:00 -- 
common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:59.001 07:45:00 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:59.001 07:45:00 -- common/autotest_common.sh@10 -- # set +x 00:04:59.001 ************************************ 00:04:59.001 START TEST json_config 00:04:59.001 ************************************ 00:04:59.001 07:45:00 json_config -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:59.001 07:45:00 json_config -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:59.001 07:45:00 json_config -- common/autotest_common.sh@1681 -- # lcov --version 00:04:59.001 07:45:00 json_config -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:59.001 07:45:00 json_config -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:59.001 07:45:00 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:59.001 07:45:00 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:59.001 07:45:00 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:59.001 07:45:00 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:04:59.001 07:45:00 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:04:59.001 07:45:00 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:04:59.001 07:45:00 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:04:59.001 07:45:00 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:04:59.001 07:45:00 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:04:59.001 07:45:00 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:04:59.001 07:45:00 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:59.001 07:45:00 json_config -- scripts/common.sh@344 -- # case "$op" in 00:04:59.001 07:45:00 json_config -- scripts/common.sh@345 -- # : 1 00:04:59.001 07:45:00 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:59.001 07:45:00 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:59.001 07:45:00 json_config -- scripts/common.sh@365 -- # decimal 1 00:04:59.001 07:45:00 json_config -- scripts/common.sh@353 -- # local d=1 00:04:59.001 07:45:00 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:59.001 07:45:00 json_config -- scripts/common.sh@355 -- # echo 1 00:04:59.001 07:45:00 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:04:59.001 07:45:00 json_config -- scripts/common.sh@366 -- # decimal 2 00:04:59.001 07:45:00 json_config -- scripts/common.sh@353 -- # local d=2 00:04:59.001 07:45:00 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:59.001 07:45:00 json_config -- scripts/common.sh@355 -- # echo 2 00:04:59.001 07:45:00 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:04:59.001 07:45:00 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:59.002 07:45:00 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:59.002 07:45:00 json_config -- scripts/common.sh@368 -- # return 0 00:04:59.002 07:45:00 json_config -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:59.002 07:45:00 json_config -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:59.002 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.002 --rc genhtml_branch_coverage=1 00:04:59.002 --rc genhtml_function_coverage=1 00:04:59.002 --rc genhtml_legend=1 00:04:59.002 --rc geninfo_all_blocks=1 00:04:59.002 --rc geninfo_unexecuted_blocks=1 00:04:59.002 00:04:59.002 ' 00:04:59.002 07:45:00 json_config -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:59.002 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.002 --rc genhtml_branch_coverage=1 00:04:59.002 --rc genhtml_function_coverage=1 00:04:59.002 --rc genhtml_legend=1 00:04:59.002 --rc geninfo_all_blocks=1 00:04:59.002 --rc geninfo_unexecuted_blocks=1 00:04:59.002 00:04:59.002 ' 00:04:59.002 07:45:00 json_config -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:59.002 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.002 --rc genhtml_branch_coverage=1 00:04:59.002 --rc genhtml_function_coverage=1 00:04:59.002 --rc genhtml_legend=1 00:04:59.002 --rc geninfo_all_blocks=1 00:04:59.002 --rc geninfo_unexecuted_blocks=1 00:04:59.002 00:04:59.002 ' 00:04:59.002 07:45:00 json_config -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:59.002 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.002 --rc genhtml_branch_coverage=1 00:04:59.002 --rc genhtml_function_coverage=1 00:04:59.002 --rc genhtml_legend=1 00:04:59.002 --rc geninfo_all_blocks=1 00:04:59.002 --rc geninfo_unexecuted_blocks=1 00:04:59.002 00:04:59.002 ' 00:04:59.002 07:45:00 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:59.002 07:45:00 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:59.002 07:45:00 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:59.002 07:45:00 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:59.002 07:45:00 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:59.002 07:45:00 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:59.002 07:45:00 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:59.002 07:45:00 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:59.002 07:45:00 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:59.002 07:45:00 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:59.002 07:45:00 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:59.002 07:45:00 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:59.002 07:45:00 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c6f65fa0-95db-4b4b-87bf-38c1f4b14e59 00:04:59.002 07:45:00 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=c6f65fa0-95db-4b4b-87bf-38c1f4b14e59 00:04:59.002 07:45:00 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:59.002 07:45:00 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:59.002 07:45:00 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:59.002 07:45:00 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:59.002 07:45:00 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:59.002 07:45:00 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:04:59.002 07:45:01 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:59.002 07:45:01 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:59.002 07:45:01 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:59.002 07:45:01 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:59.002 07:45:01 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:59.002 07:45:01 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:59.002 07:45:01 json_config -- paths/export.sh@5 -- # export PATH 00:04:59.002 07:45:01 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:59.002 07:45:01 json_config -- nvmf/common.sh@51 -- # : 0 00:04:59.002 07:45:01 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:59.002 07:45:01 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:59.002 07:45:01 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:59.002 07:45:01 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:59.002 07:45:01 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:59.002 07:45:01 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:59.002 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:59.002 07:45:01 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:59.002 07:45:01 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:59.002 07:45:01 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:59.002 07:45:01 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:59.002 07:45:01 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:59.002 07:45:01 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:59.002 07:45:01 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:59.002 07:45:01 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:59.002 07:45:01 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:04:59.002 WARNING: No tests are enabled so not running JSON configuration tests 00:04:59.002 07:45:01 json_config -- json_config/json_config.sh@28 -- # exit 0 00:04:59.002 00:04:59.002 real 0m0.179s 00:04:59.002 user 0m0.122s 00:04:59.002 sys 0m0.057s 00:04:59.261 07:45:01 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:59.261 ************************************ 00:04:59.261 END TEST json_config 00:04:59.261 ************************************ 00:04:59.261 07:45:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:59.261 07:45:01 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:59.261 07:45:01 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:59.261 07:45:01 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:59.261 07:45:01 -- common/autotest_common.sh@10 -- # set +x 00:04:59.261 ************************************ 00:04:59.261 START TEST json_config_extra_key 00:04:59.261 ************************************ 00:04:59.261 07:45:01 json_config_extra_key -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:59.261 07:45:01 json_config_extra_key -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:59.261 07:45:01 json_config_extra_key -- common/autotest_common.sh@1681 -- # lcov --version 00:04:59.261 07:45:01 json_config_extra_key -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:59.261 07:45:01 json_config_extra_key -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:59.261 07:45:01 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:59.262 07:45:01 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:59.262 07:45:01 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:59.262 07:45:01 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:04:59.262 07:45:01 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:04:59.262 07:45:01 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:04:59.262 07:45:01 
json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:04:59.262 07:45:01 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:04:59.262 07:45:01 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:04:59.262 07:45:01 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:04:59.262 07:45:01 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:59.262 07:45:01 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:04:59.262 07:45:01 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:04:59.262 07:45:01 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:59.262 07:45:01 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:59.262 07:45:01 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:04:59.262 07:45:01 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:04:59.262 07:45:01 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:59.262 07:45:01 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:04:59.262 07:45:01 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:04:59.262 07:45:01 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:04:59.262 07:45:01 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:04:59.262 07:45:01 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:59.262 07:45:01 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:04:59.262 07:45:01 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:04:59.262 07:45:01 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:59.262 07:45:01 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:59.262 07:45:01 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:04:59.262 07:45:01 json_config_extra_key -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:59.262 07:45:01 json_config_extra_key -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:59.262 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.262 --rc genhtml_branch_coverage=1 00:04:59.262 --rc genhtml_function_coverage=1 00:04:59.262 --rc genhtml_legend=1 00:04:59.262 --rc geninfo_all_blocks=1 00:04:59.262 --rc geninfo_unexecuted_blocks=1 00:04:59.262 00:04:59.262 ' 00:04:59.262 07:45:01 json_config_extra_key -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:59.262 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.262 --rc genhtml_branch_coverage=1 00:04:59.262 --rc genhtml_function_coverage=1 00:04:59.262 --rc genhtml_legend=1 00:04:59.262 --rc geninfo_all_blocks=1 00:04:59.262 --rc geninfo_unexecuted_blocks=1 00:04:59.262 00:04:59.262 ' 00:04:59.262 07:45:01 json_config_extra_key -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:59.262 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.262 --rc genhtml_branch_coverage=1 00:04:59.262 --rc genhtml_function_coverage=1 00:04:59.262 --rc genhtml_legend=1 00:04:59.262 --rc geninfo_all_blocks=1 00:04:59.262 --rc geninfo_unexecuted_blocks=1 00:04:59.262 00:04:59.262 ' 00:04:59.262 07:45:01 json_config_extra_key -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:59.262 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.262 --rc genhtml_branch_coverage=1 00:04:59.262 --rc 
genhtml_function_coverage=1 00:04:59.262 --rc genhtml_legend=1 00:04:59.262 --rc geninfo_all_blocks=1 00:04:59.262 --rc geninfo_unexecuted_blocks=1 00:04:59.262 00:04:59.262 ' 00:04:59.262 07:45:01 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:59.262 07:45:01 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:59.262 07:45:01 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:59.262 07:45:01 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:59.262 07:45:01 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:59.262 07:45:01 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:59.262 07:45:01 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:59.262 07:45:01 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:59.262 07:45:01 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:59.262 07:45:01 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:59.262 07:45:01 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:59.262 07:45:01 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:59.262 07:45:01 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c6f65fa0-95db-4b4b-87bf-38c1f4b14e59 00:04:59.262 07:45:01 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=c6f65fa0-95db-4b4b-87bf-38c1f4b14e59 00:04:59.262 07:45:01 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:59.262 07:45:01 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:59.262 07:45:01 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:59.262 07:45:01 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:59.262 07:45:01 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:59.262 07:45:01 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:04:59.262 07:45:01 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:59.262 07:45:01 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:59.262 07:45:01 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:59.262 07:45:01 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:59.262 07:45:01 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:59.262 07:45:01 json_config_extra_key -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:59.262 07:45:01 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:59.262 07:45:01 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:59.262 07:45:01 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:04:59.262 07:45:01 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:59.262 07:45:01 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:59.262 07:45:01 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:59.262 07:45:01 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:59.262 07:45:01 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:59.262 07:45:01 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:59.262 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:59.262 07:45:01 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:59.262 07:45:01 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:59.262 07:45:01 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:59.263 07:45:01 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:59.263 07:45:01 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:59.263 07:45:01 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:59.263 07:45:01 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:59.263 07:45:01 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:59.263 07:45:01 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:59.263 07:45:01 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:59.263 07:45:01 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:04:59.263 07:45:01 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:59.263 07:45:01 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:59.263 07:45:01 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:59.263 INFO: launching applications... 
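The declare -A traces above show how json_config/common.sh models each app under test: three associative arrays keyed by app name ("target") hold its pid, its RPC socket, and its launch parameters. A minimal sketch of the start helper, reconstructed from the traced lines rather than copied from the repository (the spdk_tgt path and waitforlisten come from the log; the function body is an approximation):

declare -A app_pid=([target]='')
declare -A app_socket=([target]='/var/tmp/spdk_tgt.sock')
declare -A app_params=([target]='-m 0x1 -s 1024')

json_config_test_start_app() {
  local app=$1; shift
  # Launch the target with its per-app params, its RPC socket, and any
  # extra arguments such as --json extra_key.json; remember the pid.
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt \
    ${app_params[$app]} -r "${app_socket[$app]}" "$@" &
  app_pid[$app]=$!
  # Block until the app answers RPCs on its UNIX domain socket.
  waitforlisten "${app_pid[$app]}" "${app_socket[$app]}"
}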
00:04:59.263 07:45:01 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:59.263 07:45:01 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:59.263 07:45:01 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:59.263 07:45:01 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:59.263 07:45:01 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:59.263 07:45:01 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:59.263 07:45:01 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:59.263 07:45:01 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:59.263 07:45:01 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=59024 00:04:59.263 07:45:01 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:59.263 Waiting for target to run... 00:04:59.263 07:45:01 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 59024 /var/tmp/spdk_tgt.sock 00:04:59.263 07:45:01 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:59.263 07:45:01 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 59024 ']' 00:04:59.263 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:59.263 07:45:01 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:59.263 07:45:01 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:59.263 07:45:01 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:59.263 07:45:01 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:59.263 07:45:01 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:59.522 [2024-10-09 07:45:01.413305] Starting SPDK v25.01-pre git sha1 1c2942c86 / DPDK 24.03.0 initialization... 00:04:59.522 [2024-10-09 07:45:01.413725] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59024 ] 00:04:59.780 [2024-10-09 07:45:01.753008] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:00.038 [2024-10-09 07:45:01.967659] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:00.973 00:05:00.973 INFO: shutting down applications... 00:05:00.973 07:45:02 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:00.973 07:45:02 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:05:00.973 07:45:02 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:00.973 07:45:02 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
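waitforlisten 59024 itself is only partly visible in the trace (local rpc_addr=/var/tmp/spdk_tgt.sock, local max_retries=100). A rough sketch of a polling loop consistent with those lines; the rpc_get_methods probe and the 0.1 s interval are assumptions, not taken from the log:

waitforlisten() {
  local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
  local max_retries=100 i
  for (( i = 0; i < max_retries; i++ )); do
    # Give up early if the target died during startup.
    kill -0 "$pid" 2>/dev/null || return 1
    # Treat the app as listening once any RPC succeeds on its socket.
    if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_addr" -t 1 \
         rpc_get_methods &>/dev/null; then
      return 0
    fi
    sleep 0.1
  done
  return 1
}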
00:05:00.973 07:45:02 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:00.973 07:45:02 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:00.973 07:45:02 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:00.973 07:45:02 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 59024 ]] 00:05:00.973 07:45:02 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 59024 00:05:00.973 07:45:02 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:00.973 07:45:02 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:00.973 07:45:02 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59024 00:05:00.973 07:45:02 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:01.231 07:45:03 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:01.231 07:45:03 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:01.231 07:45:03 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59024 00:05:01.232 07:45:03 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:01.798 07:45:03 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:01.798 07:45:03 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:01.798 07:45:03 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59024 00:05:01.798 07:45:03 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:02.365 07:45:04 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:02.365 07:45:04 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:02.365 07:45:04 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59024 00:05:02.365 07:45:04 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:02.928 07:45:04 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:02.928 07:45:04 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:02.928 07:45:04 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59024 00:05:02.928 07:45:04 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:03.493 07:45:05 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:03.493 07:45:05 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:03.493 07:45:05 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59024 00:05:03.493 07:45:05 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:03.751 07:45:05 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:03.751 07:45:05 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:03.751 07:45:05 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59024 00:05:03.751 SPDK target shutdown done 00:05:03.751 Success 00:05:03.751 07:45:05 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:03.751 07:45:05 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:03.751 07:45:05 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:03.751 07:45:05 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:03.751 07:45:05 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:03.751 00:05:03.751 real 0m4.655s 00:05:03.751 user 0m4.055s 00:05:03.751 sys 0m0.510s 00:05:03.751 
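The shutdown path is traced in full above: a SIGINT to the recorded pid, then up to thirty kill -0 probes half a second apart (pid 59024 needed about six rounds before the probe failed and the pid slot was cleared). As a sketch:

json_config_test_shutdown_app() {
  local app=$1
  # Ask the target to exit cleanly.
  kill -SIGINT "${app_pid[$app]}"
  # Poll for up to 15 seconds (30 x 0.5 s) for the process to go away.
  for (( i = 0; i < 30; i++ )); do
    if ! kill -0 "${app_pid[$app]}" 2>/dev/null; then
      app_pid[$app]=
      echo 'SPDK target shutdown done'
      return 0
    fi
    sleep 0.5
  done
  return 1  # still alive after the grace period
}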
************************************ 00:05:03.751 END TEST json_config_extra_key 00:05:03.751 ************************************ 00:05:03.751 07:45:05 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:03.751 07:45:05 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:03.751 07:45:05 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:03.751 07:45:05 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:03.751 07:45:05 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:03.751 07:45:05 -- common/autotest_common.sh@10 -- # set +x 00:05:04.016 ************************************ 00:05:04.016 START TEST alias_rpc 00:05:04.016 ************************************ 00:05:04.016 07:45:05 alias_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:04.016 * Looking for test storage... 00:05:04.016 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:05:04.016 07:45:05 alias_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:04.016 07:45:05 alias_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:04.016 07:45:05 alias_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:05:04.016 07:45:05 alias_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:04.016 07:45:05 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:04.016 07:45:05 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:04.016 07:45:05 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:04.016 07:45:05 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:04.016 07:45:05 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:04.016 07:45:05 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:04.016 07:45:05 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:04.016 07:45:05 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:04.016 07:45:05 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:04.016 07:45:05 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:04.016 07:45:05 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:04.016 07:45:05 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:04.016 07:45:05 alias_rpc -- scripts/common.sh@345 -- # : 1 00:05:04.016 07:45:05 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:04.016 07:45:05 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:04.016 07:45:05 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:04.016 07:45:05 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:05:04.016 07:45:05 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:04.016 07:45:05 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:05:04.016 07:45:05 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:04.016 07:45:05 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:04.016 07:45:05 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:05:04.016 07:45:05 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:04.016 07:45:05 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:05:04.016 07:45:05 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:04.016 07:45:05 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:04.016 07:45:05 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:04.016 07:45:05 alias_rpc -- scripts/common.sh@368 -- # return 0 00:05:04.016 07:45:05 alias_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:04.016 07:45:05 alias_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:04.016 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.016 --rc genhtml_branch_coverage=1 00:05:04.016 --rc genhtml_function_coverage=1 00:05:04.016 --rc genhtml_legend=1 00:05:04.016 --rc geninfo_all_blocks=1 00:05:04.016 --rc geninfo_unexecuted_blocks=1 00:05:04.016 00:05:04.016 ' 00:05:04.016 07:45:05 alias_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:04.016 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.016 --rc genhtml_branch_coverage=1 00:05:04.016 --rc genhtml_function_coverage=1 00:05:04.016 --rc genhtml_legend=1 00:05:04.016 --rc geninfo_all_blocks=1 00:05:04.016 --rc geninfo_unexecuted_blocks=1 00:05:04.016 00:05:04.016 ' 00:05:04.016 07:45:05 alias_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:04.016 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.016 --rc genhtml_branch_coverage=1 00:05:04.016 --rc genhtml_function_coverage=1 00:05:04.016 --rc genhtml_legend=1 00:05:04.016 --rc geninfo_all_blocks=1 00:05:04.016 --rc geninfo_unexecuted_blocks=1 00:05:04.016 00:05:04.016 ' 00:05:04.016 07:45:05 alias_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:04.016 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.016 --rc genhtml_branch_coverage=1 00:05:04.016 --rc genhtml_function_coverage=1 00:05:04.016 --rc genhtml_legend=1 00:05:04.016 --rc geninfo_all_blocks=1 00:05:04.016 --rc geninfo_unexecuted_blocks=1 00:05:04.016 00:05:04.016 ' 00:05:04.016 07:45:05 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:04.016 07:45:05 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=59141 00:05:04.016 07:45:05 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 59141 00:05:04.016 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
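Every target in these tests is torn down through killprocess; its trace (autotest_common.sh@950 onwards, shown below for pid 59141) checks that the pid argument is non-empty, that the process is still alive, and that its comm is not "sudo" before signalling and reaping it. A condensed sketch, with the sudo branch elided because the trace never takes it:

killprocess() {
  local pid=$1
  [[ -n $pid ]] || return 1     # '[' -z 59141 ']' in the trace
  kill -0 "$pid" || return 1    # refuse to wait on an already-dead process
  if [[ $(uname) == Linux ]]; then
    # reactor_0 here; a 'sudo' comm would need its child killed instead.
    local process_name
    process_name=$(ps --no-headers -o comm= "$pid")
  fi
  echo "killing process with pid $pid"
  kill "$pid"
  wait "$pid"                   # reap it so the next test starts clean
}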
00:05:04.016 07:45:05 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:04.016 07:45:05 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 59141 ']' 00:05:04.016 07:45:05 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:04.016 07:45:05 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:04.016 07:45:05 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:04.016 07:45:05 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:04.016 07:45:05 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:04.274 [2024-10-09 07:45:06.076422] Starting SPDK v25.01-pre git sha1 1c2942c86 / DPDK 24.03.0 initialization... 00:05:04.274 [2024-10-09 07:45:06.076880] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59141 ] 00:05:04.274 [2024-10-09 07:45:06.252465] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:04.532 [2024-10-09 07:45:06.448893] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:05.468 07:45:07 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:05.468 07:45:07 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:05.468 07:45:07 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:05:05.727 07:45:07 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 59141 00:05:05.727 07:45:07 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 59141 ']' 00:05:05.727 07:45:07 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 59141 00:05:05.727 07:45:07 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:05:05.727 07:45:07 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:05.727 07:45:07 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59141 00:05:05.727 07:45:07 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:05.727 killing process with pid 59141 00:05:05.727 07:45:07 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:05.727 07:45:07 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59141' 00:05:05.727 07:45:07 alias_rpc -- common/autotest_common.sh@969 -- # kill 59141 00:05:05.727 07:45:07 alias_rpc -- common/autotest_common.sh@974 -- # wait 59141 00:05:08.306 ************************************ 00:05:08.306 END TEST alias_rpc 00:05:08.306 ************************************ 00:05:08.306 00:05:08.306 real 0m4.124s 00:05:08.306 user 0m4.379s 00:05:08.306 sys 0m0.531s 00:05:08.306 07:45:09 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:08.306 07:45:09 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:08.306 07:45:09 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:05:08.306 07:45:09 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:08.306 07:45:09 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:08.306 07:45:09 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:08.306 07:45:09 -- common/autotest_common.sh@10 -- # set +x 00:05:08.306 ************************************ 00:05:08.306 START TEST spdkcli_tcp 
00:05:08.306 ************************************ 00:05:08.306 07:45:09 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:08.306 * Looking for test storage... 00:05:08.306 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:05:08.306 07:45:10 spdkcli_tcp -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:08.306 07:45:10 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lcov --version 00:05:08.306 07:45:10 spdkcli_tcp -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:08.306 07:45:10 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:08.306 07:45:10 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:08.306 07:45:10 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:08.306 07:45:10 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:08.306 07:45:10 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:08.306 07:45:10 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:08.306 07:45:10 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:08.306 07:45:10 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:08.306 07:45:10 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:08.306 07:45:10 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:08.306 07:45:10 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:08.306 07:45:10 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:08.306 07:45:10 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:08.306 07:45:10 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:05:08.306 07:45:10 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:08.306 07:45:10 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:08.306 07:45:10 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:08.306 07:45:10 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:05:08.306 07:45:10 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:08.306 07:45:10 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:05:08.306 07:45:10 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:08.306 07:45:10 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:08.306 07:45:10 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:05:08.306 07:45:10 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:08.306 07:45:10 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:05:08.306 07:45:10 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:08.306 07:45:10 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:08.306 07:45:10 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:08.306 07:45:10 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:05:08.306 07:45:10 spdkcli_tcp -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:08.306 07:45:10 spdkcli_tcp -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:08.306 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.306 --rc genhtml_branch_coverage=1 00:05:08.306 --rc genhtml_function_coverage=1 00:05:08.306 --rc genhtml_legend=1 00:05:08.306 --rc geninfo_all_blocks=1 00:05:08.306 --rc geninfo_unexecuted_blocks=1 00:05:08.306 00:05:08.306 ' 00:05:08.306 07:45:10 spdkcli_tcp -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:08.306 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.306 --rc genhtml_branch_coverage=1 00:05:08.306 --rc genhtml_function_coverage=1 00:05:08.306 --rc genhtml_legend=1 00:05:08.306 --rc geninfo_all_blocks=1 00:05:08.306 --rc geninfo_unexecuted_blocks=1 00:05:08.306 00:05:08.306 ' 00:05:08.306 07:45:10 spdkcli_tcp -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:08.306 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.306 --rc genhtml_branch_coverage=1 00:05:08.306 --rc genhtml_function_coverage=1 00:05:08.306 --rc genhtml_legend=1 00:05:08.306 --rc geninfo_all_blocks=1 00:05:08.306 --rc geninfo_unexecuted_blocks=1 00:05:08.306 00:05:08.306 ' 00:05:08.306 07:45:10 spdkcli_tcp -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:08.306 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.306 --rc genhtml_branch_coverage=1 00:05:08.306 --rc genhtml_function_coverage=1 00:05:08.306 --rc genhtml_legend=1 00:05:08.306 --rc geninfo_all_blocks=1 00:05:08.306 --rc geninfo_unexecuted_blocks=1 00:05:08.306 00:05:08.306 ' 00:05:08.306 07:45:10 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:05:08.306 07:45:10 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:05:08.306 07:45:10 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:05:08.306 07:45:10 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:08.306 07:45:10 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:08.306 07:45:10 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:08.306 07:45:10 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:08.306 07:45:10 spdkcli_tcp -- 
common/autotest_common.sh@724 -- # xtrace_disable 00:05:08.306 07:45:10 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:08.306 07:45:10 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=59248 00:05:08.306 07:45:10 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:08.306 07:45:10 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 59248 00:05:08.306 07:45:10 spdkcli_tcp -- common/autotest_common.sh@831 -- # '[' -z 59248 ']' 00:05:08.306 07:45:10 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:08.306 07:45:10 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:08.306 07:45:10 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:08.306 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:08.306 07:45:10 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:08.306 07:45:10 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:08.573 [2024-10-09 07:45:10.339124] Starting SPDK v25.01-pre git sha1 1c2942c86 / DPDK 24.03.0 initialization... 00:05:08.573 [2024-10-09 07:45:10.339508] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59248 ] 00:05:08.573 [2024-10-09 07:45:10.502366] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:08.831 [2024-10-09 07:45:10.696259] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:08.832 [2024-10-09 07:45:10.696273] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:05:09.766 07:45:11 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:09.766 07:45:11 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:05:09.766 07:45:11 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=59265 00:05:09.766 07:45:11 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:09.766 07:45:11 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:10.025 [ 00:05:10.025 "bdev_malloc_delete", 00:05:10.025 "bdev_malloc_create", 00:05:10.025 "bdev_null_resize", 00:05:10.025 "bdev_null_delete", 00:05:10.025 "bdev_null_create", 00:05:10.025 "bdev_nvme_cuse_unregister", 00:05:10.025 "bdev_nvme_cuse_register", 00:05:10.025 "bdev_opal_new_user", 00:05:10.025 "bdev_opal_set_lock_state", 00:05:10.025 "bdev_opal_delete", 00:05:10.025 "bdev_opal_get_info", 00:05:10.025 "bdev_opal_create", 00:05:10.025 "bdev_nvme_opal_revert", 00:05:10.025 "bdev_nvme_opal_init", 00:05:10.025 "bdev_nvme_send_cmd", 00:05:10.025 "bdev_nvme_set_keys", 00:05:10.025 "bdev_nvme_get_path_iostat", 00:05:10.025 "bdev_nvme_get_mdns_discovery_info", 00:05:10.025 "bdev_nvme_stop_mdns_discovery", 00:05:10.025 "bdev_nvme_start_mdns_discovery", 00:05:10.025 "bdev_nvme_set_multipath_policy", 00:05:10.025 "bdev_nvme_set_preferred_path", 00:05:10.025 "bdev_nvme_get_io_paths", 00:05:10.025 "bdev_nvme_remove_error_injection", 00:05:10.025 "bdev_nvme_add_error_injection", 00:05:10.025 "bdev_nvme_get_discovery_info", 00:05:10.025 "bdev_nvme_stop_discovery", 00:05:10.025 "bdev_nvme_start_discovery", 00:05:10.025 
"bdev_nvme_get_controller_health_info", 00:05:10.025 "bdev_nvme_disable_controller", 00:05:10.025 "bdev_nvme_enable_controller", 00:05:10.025 "bdev_nvme_reset_controller", 00:05:10.025 "bdev_nvme_get_transport_statistics", 00:05:10.025 "bdev_nvme_apply_firmware", 00:05:10.025 "bdev_nvme_detach_controller", 00:05:10.025 "bdev_nvme_get_controllers", 00:05:10.025 "bdev_nvme_attach_controller", 00:05:10.026 "bdev_nvme_set_hotplug", 00:05:10.026 "bdev_nvme_set_options", 00:05:10.026 "bdev_passthru_delete", 00:05:10.026 "bdev_passthru_create", 00:05:10.026 "bdev_lvol_set_parent_bdev", 00:05:10.026 "bdev_lvol_set_parent", 00:05:10.026 "bdev_lvol_check_shallow_copy", 00:05:10.026 "bdev_lvol_start_shallow_copy", 00:05:10.026 "bdev_lvol_grow_lvstore", 00:05:10.026 "bdev_lvol_get_lvols", 00:05:10.026 "bdev_lvol_get_lvstores", 00:05:10.026 "bdev_lvol_delete", 00:05:10.026 "bdev_lvol_set_read_only", 00:05:10.026 "bdev_lvol_resize", 00:05:10.026 "bdev_lvol_decouple_parent", 00:05:10.026 "bdev_lvol_inflate", 00:05:10.026 "bdev_lvol_rename", 00:05:10.026 "bdev_lvol_clone_bdev", 00:05:10.026 "bdev_lvol_clone", 00:05:10.026 "bdev_lvol_snapshot", 00:05:10.026 "bdev_lvol_create", 00:05:10.026 "bdev_lvol_delete_lvstore", 00:05:10.026 "bdev_lvol_rename_lvstore", 00:05:10.026 "bdev_lvol_create_lvstore", 00:05:10.026 "bdev_raid_set_options", 00:05:10.026 "bdev_raid_remove_base_bdev", 00:05:10.026 "bdev_raid_add_base_bdev", 00:05:10.026 "bdev_raid_delete", 00:05:10.026 "bdev_raid_create", 00:05:10.026 "bdev_raid_get_bdevs", 00:05:10.026 "bdev_error_inject_error", 00:05:10.026 "bdev_error_delete", 00:05:10.026 "bdev_error_create", 00:05:10.026 "bdev_split_delete", 00:05:10.026 "bdev_split_create", 00:05:10.026 "bdev_delay_delete", 00:05:10.026 "bdev_delay_create", 00:05:10.026 "bdev_delay_update_latency", 00:05:10.026 "bdev_zone_block_delete", 00:05:10.026 "bdev_zone_block_create", 00:05:10.026 "blobfs_create", 00:05:10.026 "blobfs_detect", 00:05:10.026 "blobfs_set_cache_size", 00:05:10.026 "bdev_xnvme_delete", 00:05:10.026 "bdev_xnvme_create", 00:05:10.026 "bdev_aio_delete", 00:05:10.026 "bdev_aio_rescan", 00:05:10.026 "bdev_aio_create", 00:05:10.026 "bdev_ftl_set_property", 00:05:10.026 "bdev_ftl_get_properties", 00:05:10.026 "bdev_ftl_get_stats", 00:05:10.026 "bdev_ftl_unmap", 00:05:10.026 "bdev_ftl_unload", 00:05:10.026 "bdev_ftl_delete", 00:05:10.026 "bdev_ftl_load", 00:05:10.026 "bdev_ftl_create", 00:05:10.026 "bdev_virtio_attach_controller", 00:05:10.026 "bdev_virtio_scsi_get_devices", 00:05:10.026 "bdev_virtio_detach_controller", 00:05:10.026 "bdev_virtio_blk_set_hotplug", 00:05:10.026 "bdev_iscsi_delete", 00:05:10.026 "bdev_iscsi_create", 00:05:10.026 "bdev_iscsi_set_options", 00:05:10.026 "accel_error_inject_error", 00:05:10.026 "ioat_scan_accel_module", 00:05:10.026 "dsa_scan_accel_module", 00:05:10.026 "iaa_scan_accel_module", 00:05:10.026 "keyring_file_remove_key", 00:05:10.026 "keyring_file_add_key", 00:05:10.026 "keyring_linux_set_options", 00:05:10.026 "fsdev_aio_delete", 00:05:10.026 "fsdev_aio_create", 00:05:10.026 "iscsi_get_histogram", 00:05:10.026 "iscsi_enable_histogram", 00:05:10.026 "iscsi_set_options", 00:05:10.026 "iscsi_get_auth_groups", 00:05:10.026 "iscsi_auth_group_remove_secret", 00:05:10.026 "iscsi_auth_group_add_secret", 00:05:10.026 "iscsi_delete_auth_group", 00:05:10.026 "iscsi_create_auth_group", 00:05:10.026 "iscsi_set_discovery_auth", 00:05:10.026 "iscsi_get_options", 00:05:10.026 "iscsi_target_node_request_logout", 00:05:10.026 "iscsi_target_node_set_redirect", 00:05:10.026 
"iscsi_target_node_set_auth", 00:05:10.026 "iscsi_target_node_add_lun", 00:05:10.026 "iscsi_get_stats", 00:05:10.026 "iscsi_get_connections", 00:05:10.026 "iscsi_portal_group_set_auth", 00:05:10.026 "iscsi_start_portal_group", 00:05:10.026 "iscsi_delete_portal_group", 00:05:10.026 "iscsi_create_portal_group", 00:05:10.026 "iscsi_get_portal_groups", 00:05:10.026 "iscsi_delete_target_node", 00:05:10.026 "iscsi_target_node_remove_pg_ig_maps", 00:05:10.026 "iscsi_target_node_add_pg_ig_maps", 00:05:10.026 "iscsi_create_target_node", 00:05:10.026 "iscsi_get_target_nodes", 00:05:10.026 "iscsi_delete_initiator_group", 00:05:10.026 "iscsi_initiator_group_remove_initiators", 00:05:10.026 "iscsi_initiator_group_add_initiators", 00:05:10.026 "iscsi_create_initiator_group", 00:05:10.026 "iscsi_get_initiator_groups", 00:05:10.026 "nvmf_set_crdt", 00:05:10.026 "nvmf_set_config", 00:05:10.026 "nvmf_set_max_subsystems", 00:05:10.026 "nvmf_stop_mdns_prr", 00:05:10.026 "nvmf_publish_mdns_prr", 00:05:10.026 "nvmf_subsystem_get_listeners", 00:05:10.026 "nvmf_subsystem_get_qpairs", 00:05:10.026 "nvmf_subsystem_get_controllers", 00:05:10.026 "nvmf_get_stats", 00:05:10.026 "nvmf_get_transports", 00:05:10.026 "nvmf_create_transport", 00:05:10.026 "nvmf_get_targets", 00:05:10.026 "nvmf_delete_target", 00:05:10.026 "nvmf_create_target", 00:05:10.026 "nvmf_subsystem_allow_any_host", 00:05:10.026 "nvmf_subsystem_set_keys", 00:05:10.026 "nvmf_subsystem_remove_host", 00:05:10.026 "nvmf_subsystem_add_host", 00:05:10.026 "nvmf_ns_remove_host", 00:05:10.026 "nvmf_ns_add_host", 00:05:10.026 "nvmf_subsystem_remove_ns", 00:05:10.026 "nvmf_subsystem_set_ns_ana_group", 00:05:10.026 "nvmf_subsystem_add_ns", 00:05:10.026 "nvmf_subsystem_listener_set_ana_state", 00:05:10.026 "nvmf_discovery_get_referrals", 00:05:10.026 "nvmf_discovery_remove_referral", 00:05:10.026 "nvmf_discovery_add_referral", 00:05:10.026 "nvmf_subsystem_remove_listener", 00:05:10.026 "nvmf_subsystem_add_listener", 00:05:10.026 "nvmf_delete_subsystem", 00:05:10.026 "nvmf_create_subsystem", 00:05:10.026 "nvmf_get_subsystems", 00:05:10.026 "env_dpdk_get_mem_stats", 00:05:10.026 "nbd_get_disks", 00:05:10.026 "nbd_stop_disk", 00:05:10.026 "nbd_start_disk", 00:05:10.026 "ublk_recover_disk", 00:05:10.026 "ublk_get_disks", 00:05:10.026 "ublk_stop_disk", 00:05:10.026 "ublk_start_disk", 00:05:10.026 "ublk_destroy_target", 00:05:10.026 "ublk_create_target", 00:05:10.026 "virtio_blk_create_transport", 00:05:10.026 "virtio_blk_get_transports", 00:05:10.026 "vhost_controller_set_coalescing", 00:05:10.026 "vhost_get_controllers", 00:05:10.026 "vhost_delete_controller", 00:05:10.026 "vhost_create_blk_controller", 00:05:10.026 "vhost_scsi_controller_remove_target", 00:05:10.026 "vhost_scsi_controller_add_target", 00:05:10.026 "vhost_start_scsi_controller", 00:05:10.026 "vhost_create_scsi_controller", 00:05:10.026 "thread_set_cpumask", 00:05:10.026 "scheduler_set_options", 00:05:10.026 "framework_get_governor", 00:05:10.026 "framework_get_scheduler", 00:05:10.026 "framework_set_scheduler", 00:05:10.026 "framework_get_reactors", 00:05:10.026 "thread_get_io_channels", 00:05:10.026 "thread_get_pollers", 00:05:10.026 "thread_get_stats", 00:05:10.026 "framework_monitor_context_switch", 00:05:10.026 "spdk_kill_instance", 00:05:10.026 "log_enable_timestamps", 00:05:10.026 "log_get_flags", 00:05:10.026 "log_clear_flag", 00:05:10.026 "log_set_flag", 00:05:10.026 "log_get_level", 00:05:10.026 "log_set_level", 00:05:10.026 "log_get_print_level", 00:05:10.026 "log_set_print_level", 
00:05:10.026 "framework_enable_cpumask_locks", 00:05:10.026 "framework_disable_cpumask_locks", 00:05:10.026 "framework_wait_init", 00:05:10.026 "framework_start_init", 00:05:10.026 "scsi_get_devices", 00:05:10.026 "bdev_get_histogram", 00:05:10.026 "bdev_enable_histogram", 00:05:10.026 "bdev_set_qos_limit", 00:05:10.026 "bdev_set_qd_sampling_period", 00:05:10.026 "bdev_get_bdevs", 00:05:10.026 "bdev_reset_iostat", 00:05:10.026 "bdev_get_iostat", 00:05:10.026 "bdev_examine", 00:05:10.026 "bdev_wait_for_examine", 00:05:10.026 "bdev_set_options", 00:05:10.026 "accel_get_stats", 00:05:10.026 "accel_set_options", 00:05:10.026 "accel_set_driver", 00:05:10.026 "accel_crypto_key_destroy", 00:05:10.026 "accel_crypto_keys_get", 00:05:10.026 "accel_crypto_key_create", 00:05:10.026 "accel_assign_opc", 00:05:10.026 "accel_get_module_info", 00:05:10.026 "accel_get_opc_assignments", 00:05:10.026 "vmd_rescan", 00:05:10.026 "vmd_remove_device", 00:05:10.026 "vmd_enable", 00:05:10.026 "sock_get_default_impl", 00:05:10.026 "sock_set_default_impl", 00:05:10.026 "sock_impl_set_options", 00:05:10.026 "sock_impl_get_options", 00:05:10.026 "iobuf_get_stats", 00:05:10.026 "iobuf_set_options", 00:05:10.026 "keyring_get_keys", 00:05:10.026 "framework_get_pci_devices", 00:05:10.026 "framework_get_config", 00:05:10.026 "framework_get_subsystems", 00:05:10.026 "fsdev_set_opts", 00:05:10.026 "fsdev_get_opts", 00:05:10.026 "trace_get_info", 00:05:10.026 "trace_get_tpoint_group_mask", 00:05:10.026 "trace_disable_tpoint_group", 00:05:10.026 "trace_enable_tpoint_group", 00:05:10.026 "trace_clear_tpoint_mask", 00:05:10.026 "trace_set_tpoint_mask", 00:05:10.026 "notify_get_notifications", 00:05:10.026 "notify_get_types", 00:05:10.026 "spdk_get_version", 00:05:10.026 "rpc_get_methods" 00:05:10.026 ] 00:05:10.026 07:45:11 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:10.026 07:45:11 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:10.026 07:45:11 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:10.026 07:45:11 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:10.026 07:45:11 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 59248 00:05:10.026 07:45:11 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 59248 ']' 00:05:10.026 07:45:11 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 59248 00:05:10.026 07:45:11 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:05:10.026 07:45:11 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:10.026 07:45:11 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59248 00:05:10.026 killing process with pid 59248 00:05:10.026 07:45:11 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:10.026 07:45:11 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:10.026 07:45:11 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59248' 00:05:10.026 07:45:11 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 59248 00:05:10.027 07:45:11 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 59248 00:05:12.618 ************************************ 00:05:12.618 END TEST spdkcli_tcp 00:05:12.618 ************************************ 00:05:12.618 00:05:12.618 real 0m4.142s 00:05:12.618 user 0m7.377s 00:05:12.618 sys 0m0.559s 00:05:12.618 07:45:14 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:12.618 07:45:14 spdkcli_tcp -- common/autotest_common.sh@10 
-- # set +x 00:05:12.618 07:45:14 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:12.618 07:45:14 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:12.618 07:45:14 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:12.618 07:45:14 -- common/autotest_common.sh@10 -- # set +x 00:05:12.618 ************************************ 00:05:12.618 START TEST dpdk_mem_utility 00:05:12.618 ************************************ 00:05:12.618 07:45:14 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:12.618 * Looking for test storage... 00:05:12.618 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:05:12.618 07:45:14 dpdk_mem_utility -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:12.618 07:45:14 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lcov --version 00:05:12.618 07:45:14 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:12.618 07:45:14 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:12.618 07:45:14 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:12.618 07:45:14 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:12.618 07:45:14 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:12.618 07:45:14 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:05:12.618 07:45:14 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:05:12.618 07:45:14 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:05:12.618 07:45:14 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:05:12.618 07:45:14 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:05:12.618 07:45:14 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:05:12.618 07:45:14 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:05:12.618 07:45:14 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:12.618 07:45:14 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:05:12.618 07:45:14 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:05:12.618 07:45:14 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:12.618 07:45:14 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:12.618 07:45:14 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:05:12.618 07:45:14 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:05:12.618 07:45:14 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:12.618 07:45:14 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:05:12.618 07:45:14 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:05:12.618 07:45:14 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:05:12.618 07:45:14 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:05:12.618 07:45:14 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:12.618 07:45:14 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:05:12.618 07:45:14 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:05:12.618 07:45:14 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:12.618 07:45:14 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:12.618 07:45:14 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:05:12.618 07:45:14 dpdk_mem_utility -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:12.618 07:45:14 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:12.618 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.618 --rc genhtml_branch_coverage=1 00:05:12.618 --rc genhtml_function_coverage=1 00:05:12.618 --rc genhtml_legend=1 00:05:12.618 --rc geninfo_all_blocks=1 00:05:12.618 --rc geninfo_unexecuted_blocks=1 00:05:12.618 00:05:12.618 ' 00:05:12.618 07:45:14 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:12.618 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.618 --rc genhtml_branch_coverage=1 00:05:12.618 --rc genhtml_function_coverage=1 00:05:12.618 --rc genhtml_legend=1 00:05:12.618 --rc geninfo_all_blocks=1 00:05:12.618 --rc geninfo_unexecuted_blocks=1 00:05:12.618 00:05:12.618 ' 00:05:12.618 07:45:14 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:12.618 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.618 --rc genhtml_branch_coverage=1 00:05:12.618 --rc genhtml_function_coverage=1 00:05:12.618 --rc genhtml_legend=1 00:05:12.618 --rc geninfo_all_blocks=1 00:05:12.618 --rc geninfo_unexecuted_blocks=1 00:05:12.618 00:05:12.618 ' 00:05:12.618 07:45:14 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:12.618 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.618 --rc genhtml_branch_coverage=1 00:05:12.618 --rc genhtml_function_coverage=1 00:05:12.618 --rc genhtml_legend=1 00:05:12.618 --rc geninfo_all_blocks=1 00:05:12.618 --rc geninfo_unexecuted_blocks=1 00:05:12.618 00:05:12.618 ' 00:05:12.618 07:45:14 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:12.618 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
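The dpdk_mem_utility test that follows is short: start a bare spdk_tgt, ask it over RPC to dump DPDK's allocator state, then summarize the dump with scripts/dpdk_mem_info.py, once for totals and once per heap. A sketch of that flow; the dump path comes from the RPC reply shown below, and rpc.py's default socket is assumed:

spdk_repo=/home/vagrant/spdk_repo/spdk
"$spdk_repo/build/bin/spdk_tgt" &
spdkpid=$!
waitforlisten "$spdkpid"

# The reply names the dump file: {"filename": "/tmp/spdk_mem_dump.txt"}
"$spdk_repo/scripts/rpc.py" env_dpdk_get_mem_stats

"$spdk_repo/scripts/dpdk_mem_info.py"        # heap/mempool/memzone totals
"$spdk_repo/scripts/dpdk_mem_info.py" -m 0   # per-heap element detail

killprocess "$spdkpid"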
00:05:12.618 07:45:14 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=59370 00:05:12.618 07:45:14 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 59370 00:05:12.618 07:45:14 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 59370 ']' 00:05:12.618 07:45:14 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:12.618 07:45:14 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:12.618 07:45:14 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:12.618 07:45:14 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:12.618 07:45:14 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:12.618 07:45:14 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:12.618 [2024-10-09 07:45:14.447502] Starting SPDK v25.01-pre git sha1 1c2942c86 / DPDK 24.03.0 initialization... 00:05:12.618 [2024-10-09 07:45:14.448142] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59370 ] 00:05:12.618 [2024-10-09 07:45:14.617850] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:12.877 [2024-10-09 07:45:14.802757] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:13.814 07:45:15 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:13.814 07:45:15 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:05:13.814 07:45:15 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:13.814 07:45:15 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:13.814 07:45:15 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:13.814 07:45:15 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:13.814 { 00:05:13.814 "filename": "/tmp/spdk_mem_dump.txt" 00:05:13.814 } 00:05:13.814 07:45:15 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:13.814 07:45:15 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:13.814 DPDK memory size 866.000000 MiB in 1 heap(s) 00:05:13.814 1 heaps totaling size 866.000000 MiB 00:05:13.814 size: 866.000000 MiB heap id: 0 00:05:13.814 end heaps---------- 00:05:13.814 9 mempools totaling size 642.649841 MiB 00:05:13.814 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:13.814 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:13.814 size: 92.545471 MiB name: bdev_io_59370 00:05:13.814 size: 51.011292 MiB name: evtpool_59370 00:05:13.814 size: 50.003479 MiB name: msgpool_59370 00:05:13.814 size: 36.509338 MiB name: fsdev_io_59370 00:05:13.814 size: 21.763794 MiB name: PDU_Pool 00:05:13.814 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:13.814 size: 0.026123 MiB name: Session_Pool 00:05:13.814 end mempools------- 00:05:13.814 6 memzones totaling size 4.142822 MiB 00:05:13.814 size: 1.000366 MiB name: RG_ring_0_59370 00:05:13.814 size: 1.000366 MiB name: RG_ring_1_59370 00:05:13.814 size: 1.000366 MiB name: RG_ring_4_59370 
00:05:13.814 size: 1.000366 MiB name: RG_ring_5_59370 00:05:13.814 size: 0.125366 MiB name: RG_ring_2_59370 00:05:13.814 size: 0.015991 MiB name: RG_ring_3_59370 00:05:13.814 end memzones------- 00:05:13.814 07:45:15 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:05:13.814 heap id: 0 total size: 866.000000 MiB number of busy elements: 312 number of free elements: 19 00:05:13.814 list of free elements. size: 19.914307 MiB 00:05:13.814 element at address: 0x200000400000 with size: 1.999451 MiB 00:05:13.814 element at address: 0x200000800000 with size: 1.996887 MiB 00:05:13.814 element at address: 0x200009600000 with size: 1.995972 MiB 00:05:13.814 element at address: 0x20000d800000 with size: 1.995972 MiB 00:05:13.814 element at address: 0x200007000000 with size: 1.991028 MiB 00:05:13.814 element at address: 0x20001bf00040 with size: 0.999939 MiB 00:05:13.814 element at address: 0x20001c300040 with size: 0.999939 MiB 00:05:13.814 element at address: 0x20001c400000 with size: 0.999084 MiB 00:05:13.815 element at address: 0x200035000000 with size: 0.994324 MiB 00:05:13.815 element at address: 0x20001bc00000 with size: 0.959656 MiB 00:05:13.815 element at address: 0x20001c700040 with size: 0.936401 MiB 00:05:13.815 element at address: 0x200000200000 with size: 0.831909 MiB 00:05:13.815 element at address: 0x20001de00000 with size: 0.561951 MiB 00:05:13.815 element at address: 0x200003e00000 with size: 0.490173 MiB 00:05:13.815 element at address: 0x20001c000000 with size: 0.489197 MiB 00:05:13.815 element at address: 0x20001c800000 with size: 0.485413 MiB 00:05:13.815 element at address: 0x200015e00000 with size: 0.443481 MiB 00:05:13.815 element at address: 0x20002b200000 with size: 0.390442 MiB 00:05:13.815 element at address: 0x200003a00000 with size: 0.353088 MiB 00:05:13.815 list of standard malloc elements. 
size: 199.286987 MiB 00:05:13.815 element at address: 0x20000d9fef80 with size: 132.000183 MiB 00:05:13.815 element at address: 0x2000097fef80 with size: 64.000183 MiB 00:05:13.815 element at address: 0x20001bdfff80 with size: 1.000183 MiB 00:05:13.815 element at address: 0x20001c1fff80 with size: 1.000183 MiB 00:05:13.815 element at address: 0x20001c5fff80 with size: 1.000183 MiB 00:05:13.815 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:05:13.815 element at address: 0x20001c7eff40 with size: 0.062683 MiB 00:05:13.815 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:05:13.815 element at address: 0x20000d7ff040 with size: 0.000427 MiB 00:05:13.815 element at address: 0x20001c7efdc0 with size: 0.000366 MiB 00:05:13.815 element at address: 0x200015dff040 with size: 0.000305 MiB 00:05:13.815 element at address: 0x2000002d4f80 with size: 0.000244 MiB 00:05:13.815 element at address: 0x2000002d5080 with size: 0.000244 MiB 00:05:13.815 element at address: 0x2000002d5180 with size: 0.000244 MiB 00:05:13.815 element at address: 0x2000002d5280 with size: 0.000244 MiB 00:05:13.815 element at address: 0x2000002d5380 with size: 0.000244 MiB 00:05:13.815 element at address: 0x2000002d5480 with size: 0.000244 MiB 00:05:13.815 element at address: 0x2000002d5580 with size: 0.000244 MiB 00:05:13.815 element at address: 0x2000002d5680 with size: 0.000244 MiB 00:05:13.815 element at address: 0x2000002d5780 with size: 0.000244 MiB 00:05:13.815 element at address: 0x2000002d5880 with size: 0.000244 MiB 00:05:13.815 element at address: 0x2000002d5980 with size: 0.000244 MiB 00:05:13.815 element at address: 0x2000002d5a80 with size: 0.000244 MiB 00:05:13.815 element at address: 0x2000002d5b80 with size: 0.000244 MiB 00:05:13.815 element at address: 0x2000002d5c80 with size: 0.000244 MiB 00:05:13.815 element at address: 0x2000002d5d80 with size: 0.000244 MiB 00:05:13.815 element at address: 0x2000002d5e80 with size: 0.000244 MiB 00:05:13.815 element at address: 0x2000002d5f80 with size: 0.000244 MiB 00:05:13.815 element at address: 0x2000002d6200 with size: 0.000244 MiB 00:05:13.815 element at address: 0x2000002d6300 with size: 0.000244 MiB 00:05:13.815 element at address: 0x2000002d6400 with size: 0.000244 MiB 00:05:13.815 element at address: 0x2000002d6500 with size: 0.000244 MiB 00:05:13.815 element at address: 0x2000002d6600 with size: 0.000244 MiB 00:05:13.815 element at address: 0x2000002d6700 with size: 0.000244 MiB 00:05:13.815 element at address: 0x2000002d6800 with size: 0.000244 MiB 00:05:13.815 element at address: 0x2000002d6900 with size: 0.000244 MiB 00:05:13.815 element at address: 0x2000002d6a00 with size: 0.000244 MiB 00:05:13.815 element at address: 0x2000002d6b00 with size: 0.000244 MiB 00:05:13.815 element at address: 0x2000002d6c00 with size: 0.000244 MiB 00:05:13.815 element at address: 0x2000002d6d00 with size: 0.000244 MiB 00:05:13.815 element at address: 0x2000002d6e00 with size: 0.000244 MiB 00:05:13.815 element at address: 0x2000002d6f00 with size: 0.000244 MiB 00:05:13.815 element at address: 0x2000002d7000 with size: 0.000244 MiB 00:05:13.815 element at address: 0x2000002d7100 with size: 0.000244 MiB 00:05:13.815 element at address: 0x2000002d7200 with size: 0.000244 MiB 00:05:13.815 element at address: 0x2000002d7300 with size: 0.000244 MiB 00:05:13.815 element at address: 0x2000002d7400 with size: 0.000244 MiB 00:05:13.815 element at address: 0x2000002d7500 with size: 0.000244 MiB 00:05:13.815 element at address: 0x2000002d7600 with size: 0.000244 MiB 
00:05:13.815 element at address: 0x2000002d7700 with size: 0.000244 MiB 00:05:13.815 element at address: 0x2000002d7800 with size: 0.000244 MiB 00:05:13.815 element at address: 0x2000002d7900 with size: 0.000244 MiB 00:05:13.815 element at address: 0x2000002d7a00 with size: 0.000244 MiB 00:05:13.815 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:05:13.815 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:05:13.815 element at address: 0x200003a7eac0 with size: 0.000244 MiB 00:05:13.815 element at address: 0x200003a7ebc0 with size: 0.000244 MiB 00:05:13.815 element at address: 0x200003a7ecc0 with size: 0.000244 MiB 00:05:13.815 element at address: 0x200003a7edc0 with size: 0.000244 MiB 00:05:13.815 element at address: 0x200003a7eec0 with size: 0.000244 MiB 00:05:13.815 element at address: 0x200003a7efc0 with size: 0.000244 MiB 00:05:13.815 element at address: 0x200003a7f0c0 with size: 0.000244 MiB 00:05:13.815 element at address: 0x200003a7f1c0 with size: 0.000244 MiB 00:05:13.815 element at address: 0x200003a7f2c0 with size: 0.000244 MiB 00:05:13.815 element at address: 0x200003a7f3c0 with size: 0.000244 MiB 00:05:13.815 element at address: 0x200003a7f4c0 with size: 0.000244 MiB 00:05:13.815 element at address: 0x200003aff800 with size: 0.000244 MiB 00:05:13.815 element at address: 0x200003affa80 with size: 0.000244 MiB 00:05:13.815 element at address: 0x200003e7d7c0 with size: 0.000244 MiB 00:05:13.815 element at address: 0x200003e7d8c0 with size: 0.000244 MiB 00:05:13.815 element at address: 0x200003e7d9c0 with size: 0.000244 MiB 00:05:13.815 element at address: 0x200003e7dac0 with size: 0.000244 MiB 00:05:13.815 element at address: 0x200003e7dbc0 with size: 0.000244 MiB 00:05:13.815 element at address: 0x200003e7dcc0 with size: 0.000244 MiB 00:05:13.815 element at address: 0x200003e7ddc0 with size: 0.000244 MiB 00:05:13.815 element at address: 0x200003e7dec0 with size: 0.000244 MiB 00:05:13.815 element at address: 0x200003e7dfc0 with size: 0.000244 MiB 00:05:13.815 element at address: 0x200003e7e0c0 with size: 0.000244 MiB 00:05:13.815 element at address: 0x200003e7e1c0 with size: 0.000244 MiB 00:05:13.815 element at address: 0x200003e7e2c0 with size: 0.000244 MiB 00:05:13.815 element at address: 0x200003e7e3c0 with size: 0.000244 MiB 00:05:13.815 element at address: 0x200003e7e4c0 with size: 0.000244 MiB 00:05:13.815 element at address: 0x200003e7e5c0 with size: 0.000244 MiB 00:05:13.815 element at address: 0x200003e7e6c0 with size: 0.000244 MiB 00:05:13.815 element at address: 0x200003e7e7c0 with size: 0.000244 MiB 00:05:13.815 element at address: 0x200003e7e8c0 with size: 0.000244 MiB 00:05:13.815 element at address: 0x200003e7e9c0 with size: 0.000244 MiB 00:05:13.815 element at address: 0x200003e7eac0 with size: 0.000244 MiB 00:05:13.815 element at address: 0x200003e7ebc0 with size: 0.000244 MiB 00:05:13.815 element at address: 0x200003efef00 with size: 0.000244 MiB 00:05:13.815 element at address: 0x200003eff000 with size: 0.000244 MiB 00:05:13.815 element at address: 0x20000d7ff200 with size: 0.000244 MiB 00:05:13.815 element at address: 0x20000d7ff300 with size: 0.000244 MiB 00:05:13.815 element at address: 0x20000d7ff400 with size: 0.000244 MiB 00:05:13.815 element at address: 0x20000d7ff500 with size: 0.000244 MiB 00:05:13.815 element at address: 0x20000d7ff600 with size: 0.000244 MiB 00:05:13.815 element at address: 0x20000d7ff700 with size: 0.000244 MiB 00:05:13.815 element at address: 0x20000d7ff800 with size: 0.000244 MiB 00:05:13.815 element at 
address: 0x20000d7ff900 with size: 0.000244 MiB 00:05:13.815 element at address: 0x20000d7ffa00 with size: 0.000244 MiB 00:05:13.815 element at address: 0x20000d7ffb00 with size: 0.000244 MiB 00:05:13.815 element at address: 0x20000d7ffc00 with size: 0.000244 MiB 00:05:13.815 element at address: 0x20000d7ffd00 with size: 0.000244 MiB 00:05:13.815 element at address: 0x20000d7ffe00 with size: 0.000244 MiB 00:05:13.815 element at address: 0x20000d7fff00 with size: 0.000244 MiB 00:05:13.815 element at address: 0x200015dff180 with size: 0.000244 MiB 00:05:13.815 element at address: 0x200015dff280 with size: 0.000244 MiB 00:05:13.815 element at address: 0x200015dff380 with size: 0.000244 MiB 00:05:13.815 element at address: 0x200015dff480 with size: 0.000244 MiB 00:05:13.815 element at address: 0x200015dff580 with size: 0.000244 MiB 00:05:13.815 element at address: 0x200015dff680 with size: 0.000244 MiB 00:05:13.815 element at address: 0x200015dff780 with size: 0.000244 MiB 00:05:13.815 element at address: 0x200015dff880 with size: 0.000244 MiB 00:05:13.815 element at address: 0x200015dff980 with size: 0.000244 MiB 00:05:13.815 element at address: 0x200015dffa80 with size: 0.000244 MiB 00:05:13.815 element at address: 0x200015dffb80 with size: 0.000244 MiB 00:05:13.815 element at address: 0x200015dffc80 with size: 0.000244 MiB 00:05:13.815 element at address: 0x200015dfff00 with size: 0.000244 MiB 00:05:13.815 element at address: 0x200015e71880 with size: 0.000244 MiB 00:05:13.815 element at address: 0x200015e71980 with size: 0.000244 MiB 00:05:13.815 element at address: 0x200015e71a80 with size: 0.000244 MiB 00:05:13.815 element at address: 0x200015e71b80 with size: 0.000244 MiB 00:05:13.815 element at address: 0x200015e71c80 with size: 0.000244 MiB 00:05:13.815 element at address: 0x200015e71d80 with size: 0.000244 MiB 00:05:13.815 element at address: 0x200015e71e80 with size: 0.000244 MiB 00:05:13.815 element at address: 0x200015e71f80 with size: 0.000244 MiB 00:05:13.815 element at address: 0x200015e72080 with size: 0.000244 MiB 00:05:13.815 element at address: 0x200015e72180 with size: 0.000244 MiB 00:05:13.815 element at address: 0x200015ef24c0 with size: 0.000244 MiB 00:05:13.815 element at address: 0x20001bcfdd00 with size: 0.000244 MiB 00:05:13.815 element at address: 0x20001c07d3c0 with size: 0.000244 MiB 00:05:13.815 element at address: 0x20001c07d4c0 with size: 0.000244 MiB 00:05:13.815 element at address: 0x20001c07d5c0 with size: 0.000244 MiB 00:05:13.815 element at address: 0x20001c07d6c0 with size: 0.000244 MiB 00:05:13.815 element at address: 0x20001c07d7c0 with size: 0.000244 MiB 00:05:13.815 element at address: 0x20001c07d8c0 with size: 0.000244 MiB 00:05:13.815 element at address: 0x20001c07d9c0 with size: 0.000244 MiB 00:05:13.815 element at address: 0x20001c0fdd00 with size: 0.000244 MiB 00:05:13.815 element at address: 0x20001c4ffc40 with size: 0.000244 MiB 00:05:13.815 element at address: 0x20001c7efbc0 with size: 0.000244 MiB 00:05:13.815 element at address: 0x20001c7efcc0 with size: 0.000244 MiB 00:05:13.815 element at address: 0x20001c8bc680 with size: 0.000244 MiB 00:05:13.815 element at address: 0x20001de8fdc0 with size: 0.000244 MiB 00:05:13.815 element at address: 0x20001de8fec0 with size: 0.000244 MiB 00:05:13.815 element at address: 0x20001de8ffc0 with size: 0.000244 MiB 00:05:13.815 element at address: 0x20001de900c0 with size: 0.000244 MiB 00:05:13.816 element at address: 0x20001de901c0 with size: 0.000244 MiB 00:05:13.816 element at address: 0x20001de902c0 
with size: 0.000244 MiB 00:05:13.816 element at address: 0x20001de903c0 with size: 0.000244 MiB 00:05:13.816 element at address: 0x20001de904c0 with size: 0.000244 MiB 00:05:13.816 element at address: 0x20001de905c0 with size: 0.000244 MiB 00:05:13.816 element at address: 0x20001de906c0 with size: 0.000244 MiB 00:05:13.816 element at address: 0x20001de907c0 with size: 0.000244 MiB 00:05:13.816 element at address: 0x20001de908c0 with size: 0.000244 MiB 00:05:13.816 element at address: 0x20001de909c0 with size: 0.000244 MiB 00:05:13.816 element at address: 0x20001de90ac0 with size: 0.000244 MiB 00:05:13.816 element at address: 0x20001de90bc0 with size: 0.000244 MiB 00:05:13.816 element at address: 0x20001de90cc0 with size: 0.000244 MiB 00:05:13.816 element at address: 0x20001de90dc0 with size: 0.000244 MiB 00:05:13.816 element at address: 0x20001de90ec0 with size: 0.000244 MiB 00:05:13.816 element at address: 0x20001de90fc0 with size: 0.000244 MiB 00:05:13.816 element at address: 0x20001de910c0 with size: 0.000244 MiB 00:05:13.816 element at address: 0x20001de911c0 with size: 0.000244 MiB 00:05:13.816 element at address: 0x20001de912c0 with size: 0.000244 MiB 00:05:13.816 element at address: 0x20001de913c0 with size: 0.000244 MiB 00:05:13.816 element at address: 0x20001de914c0 with size: 0.000244 MiB 00:05:13.816 element at address: 0x20001de915c0 with size: 0.000244 MiB 00:05:13.816 element at address: 0x20001de916c0 with size: 0.000244 MiB 00:05:13.816 element at address: 0x20001de917c0 with size: 0.000244 MiB 00:05:13.816 element at address: 0x20001de918c0 with size: 0.000244 MiB 00:05:13.816 element at address: 0x20001de919c0 with size: 0.000244 MiB 00:05:13.816 element at address: 0x20001de91ac0 with size: 0.000244 MiB 00:05:13.816 element at address: 0x20001de91bc0 with size: 0.000244 MiB 00:05:13.816 element at address: 0x20001de91cc0 with size: 0.000244 MiB 00:05:13.816 element at address: 0x20001de91dc0 with size: 0.000244 MiB 00:05:13.816 element at address: 0x20001de91ec0 with size: 0.000244 MiB 00:05:13.816 element at address: 0x20001de91fc0 with size: 0.000244 MiB 00:05:13.816 element at address: 0x20001de920c0 with size: 0.000244 MiB 00:05:13.816 element at address: 0x20001de921c0 with size: 0.000244 MiB 00:05:13.816 element at address: 0x20001de922c0 with size: 0.000244 MiB 00:05:13.816 element at address: 0x20001de923c0 with size: 0.000244 MiB 00:05:13.816 element at address: 0x20001de924c0 with size: 0.000244 MiB 00:05:13.816 element at address: 0x20001de925c0 with size: 0.000244 MiB 00:05:13.816 element at address: 0x20001de926c0 with size: 0.000244 MiB 00:05:13.816 element at address: 0x20001de927c0 with size: 0.000244 MiB 00:05:13.816 element at address: 0x20001de928c0 with size: 0.000244 MiB 00:05:13.816 element at address: 0x20001de929c0 with size: 0.000244 MiB 00:05:13.816 element at address: 0x20001de92ac0 with size: 0.000244 MiB 00:05:13.816 element at address: 0x20001de92bc0 with size: 0.000244 MiB 00:05:13.816 element at address: 0x20001de92cc0 with size: 0.000244 MiB 00:05:13.816 element at address: 0x20001de92dc0 with size: 0.000244 MiB 00:05:13.816 element at address: 0x20001de92ec0 with size: 0.000244 MiB 00:05:13.816 element at address: 0x20001de92fc0 with size: 0.000244 MiB 00:05:13.816 element at address: 0x20001de930c0 with size: 0.000244 MiB 00:05:13.816 element at address: 0x20001de931c0 with size: 0.000244 MiB 00:05:13.816 element at address: 0x20001de932c0 with size: 0.000244 MiB 00:05:13.816 element at address: 0x20001de933c0 with size: 0.000244 MiB 
00:05:13.816 element at address: 0x20001de934c0 with size: 0.000244 MiB 00:05:13.816 element at address: 0x20001de935c0 with size: 0.000244 MiB 00:05:13.816 element at address: 0x20001de936c0 with size: 0.000244 MiB 00:05:13.816 element at address: 0x20001de937c0 with size: 0.000244 MiB 00:05:13.816 element at address: 0x20001de938c0 with size: 0.000244 MiB 00:05:13.816 element at address: 0x20001de939c0 with size: 0.000244 MiB 00:05:13.816 element at address: 0x20001de93ac0 with size: 0.000244 MiB 00:05:13.816 element at address: 0x20001de93bc0 with size: 0.000244 MiB 00:05:13.816 element at address: 0x20001de93cc0 with size: 0.000244 MiB 00:05:13.816 element at address: 0x20001de93dc0 with size: 0.000244 MiB 00:05:13.816 element at address: 0x20001de93ec0 with size: 0.000244 MiB 00:05:13.816 element at address: 0x20001de93fc0 with size: 0.000244 MiB 00:05:13.816 element at address: 0x20001de940c0 with size: 0.000244 MiB 00:05:13.816 element at address: 0x20001de941c0 with size: 0.000244 MiB 00:05:13.816 element at address: 0x20001de942c0 with size: 0.000244 MiB 00:05:13.816 element at address: 0x20001de943c0 with size: 0.000244 MiB 00:05:13.816 element at address: 0x20001de944c0 with size: 0.000244 MiB 00:05:13.816 element at address: 0x20001de945c0 with size: 0.000244 MiB 00:05:13.816 element at address: 0x20001de946c0 with size: 0.000244 MiB 00:05:13.816 element at address: 0x20001de947c0 with size: 0.000244 MiB 00:05:13.816 element at address: 0x20001de948c0 with size: 0.000244 MiB 00:05:13.816 element at address: 0x20001de949c0 with size: 0.000244 MiB 00:05:13.816 element at address: 0x20001de94ac0 with size: 0.000244 MiB 00:05:13.816 element at address: 0x20001de94bc0 with size: 0.000244 MiB 00:05:13.816 element at address: 0x20001de94cc0 with size: 0.000244 MiB 00:05:13.816 element at address: 0x20001de94dc0 with size: 0.000244 MiB 00:05:13.816 element at address: 0x20001de94ec0 with size: 0.000244 MiB 00:05:13.816 element at address: 0x20001de94fc0 with size: 0.000244 MiB 00:05:13.816 element at address: 0x20001de950c0 with size: 0.000244 MiB 00:05:13.816 element at address: 0x20001de951c0 with size: 0.000244 MiB 00:05:13.816 element at address: 0x20001de952c0 with size: 0.000244 MiB 00:05:13.816 element at address: 0x20001de953c0 with size: 0.000244 MiB 00:05:13.816 element at address: 0x20002b263f40 with size: 0.000244 MiB 00:05:13.816 element at address: 0x20002b264040 with size: 0.000244 MiB 00:05:13.816 element at address: 0x20002b26ad00 with size: 0.000244 MiB 00:05:13.816 element at address: 0x20002b26af80 with size: 0.000244 MiB 00:05:13.816 element at address: 0x20002b26b080 with size: 0.000244 MiB 00:05:13.816 element at address: 0x20002b26b180 with size: 0.000244 MiB 00:05:13.816 element at address: 0x20002b26b280 with size: 0.000244 MiB 00:05:13.816 element at address: 0x20002b26b380 with size: 0.000244 MiB 00:05:13.816 element at address: 0x20002b26b480 with size: 0.000244 MiB 00:05:13.816 element at address: 0x20002b26b580 with size: 0.000244 MiB 00:05:13.816 element at address: 0x20002b26b680 with size: 0.000244 MiB 00:05:13.816 element at address: 0x20002b26b780 with size: 0.000244 MiB 00:05:13.816 element at address: 0x20002b26b880 with size: 0.000244 MiB 00:05:13.816 element at address: 0x20002b26b980 with size: 0.000244 MiB 00:05:13.816 element at address: 0x20002b26ba80 with size: 0.000244 MiB 00:05:13.816 element at address: 0x20002b26bb80 with size: 0.000244 MiB 00:05:13.816 element at address: 0x20002b26bc80 with size: 0.000244 MiB 00:05:13.816 element at 
address: 0x20002b26bd80 with size: 0.000244 MiB 00:05:13.816 element at address: 0x20002b26be80 with size: 0.000244 MiB 00:05:13.816 element at address: 0x20002b26bf80 with size: 0.000244 MiB 00:05:13.816 element at address: 0x20002b26c080 with size: 0.000244 MiB 00:05:13.816 element at address: 0x20002b26c180 with size: 0.000244 MiB 00:05:13.816 element at address: 0x20002b26c280 with size: 0.000244 MiB 00:05:13.816 element at address: 0x20002b26c380 with size: 0.000244 MiB 00:05:13.816 element at address: 0x20002b26c480 with size: 0.000244 MiB 00:05:13.816 element at address: 0x20002b26c580 with size: 0.000244 MiB 00:05:13.816 element at address: 0x20002b26c680 with size: 0.000244 MiB 00:05:13.816 element at address: 0x20002b26c780 with size: 0.000244 MiB 00:05:13.816 element at address: 0x20002b26c880 with size: 0.000244 MiB 00:05:13.816 element at address: 0x20002b26c980 with size: 0.000244 MiB 00:05:13.816 element at address: 0x20002b26ca80 with size: 0.000244 MiB 00:05:13.816 element at address: 0x20002b26cb80 with size: 0.000244 MiB 00:05:13.816 element at address: 0x20002b26cc80 with size: 0.000244 MiB 00:05:13.816 element at address: 0x20002b26cd80 with size: 0.000244 MiB 00:05:13.816 element at address: 0x20002b26ce80 with size: 0.000244 MiB 00:05:13.816 element at address: 0x20002b26cf80 with size: 0.000244 MiB 00:05:13.816 element at address: 0x20002b26d080 with size: 0.000244 MiB 00:05:13.816 element at address: 0x20002b26d180 with size: 0.000244 MiB 00:05:13.816 element at address: 0x20002b26d280 with size: 0.000244 MiB 00:05:13.816 element at address: 0x20002b26d380 with size: 0.000244 MiB 00:05:13.816 element at address: 0x20002b26d480 with size: 0.000244 MiB 00:05:13.816 element at address: 0x20002b26d580 with size: 0.000244 MiB 00:05:13.816 element at address: 0x20002b26d680 with size: 0.000244 MiB 00:05:13.816 element at address: 0x20002b26d780 with size: 0.000244 MiB 00:05:13.816 element at address: 0x20002b26d880 with size: 0.000244 MiB 00:05:13.816 element at address: 0x20002b26d980 with size: 0.000244 MiB 00:05:13.816 element at address: 0x20002b26da80 with size: 0.000244 MiB 00:05:13.816 element at address: 0x20002b26db80 with size: 0.000244 MiB 00:05:13.816 element at address: 0x20002b26dc80 with size: 0.000244 MiB 00:05:13.816 element at address: 0x20002b26dd80 with size: 0.000244 MiB 00:05:13.816 element at address: 0x20002b26de80 with size: 0.000244 MiB 00:05:13.816 element at address: 0x20002b26df80 with size: 0.000244 MiB 00:05:13.816 element at address: 0x20002b26e080 with size: 0.000244 MiB 00:05:13.816 element at address: 0x20002b26e180 with size: 0.000244 MiB 00:05:13.816 element at address: 0x20002b26e280 with size: 0.000244 MiB 00:05:13.816 element at address: 0x20002b26e380 with size: 0.000244 MiB 00:05:13.816 element at address: 0x20002b26e480 with size: 0.000244 MiB 00:05:13.816 element at address: 0x20002b26e580 with size: 0.000244 MiB 00:05:13.816 element at address: 0x20002b26e680 with size: 0.000244 MiB 00:05:13.816 element at address: 0x20002b26e780 with size: 0.000244 MiB 00:05:13.816 element at address: 0x20002b26e880 with size: 0.000244 MiB 00:05:13.816 element at address: 0x20002b26e980 with size: 0.000244 MiB 00:05:13.816 element at address: 0x20002b26ea80 with size: 0.000244 MiB 00:05:13.816 element at address: 0x20002b26eb80 with size: 0.000244 MiB 00:05:13.816 element at address: 0x20002b26ec80 with size: 0.000244 MiB 00:05:13.816 element at address: 0x20002b26ed80 with size: 0.000244 MiB 00:05:13.816 element at address: 0x20002b26ee80 
with size: 0.000244 MiB 00:05:13.816 element at address: 0x20002b26ef80 with size: 0.000244 MiB 00:05:13.816 element at address: 0x20002b26f080 with size: 0.000244 MiB 00:05:13.816 element at address: 0x20002b26f180 with size: 0.000244 MiB 00:05:13.816 element at address: 0x20002b26f280 with size: 0.000244 MiB 00:05:13.816 element at address: 0x20002b26f380 with size: 0.000244 MiB 00:05:13.816 element at address: 0x20002b26f480 with size: 0.000244 MiB 00:05:13.816 element at address: 0x20002b26f580 with size: 0.000244 MiB 00:05:13.817 element at address: 0x20002b26f680 with size: 0.000244 MiB 00:05:13.817 element at address: 0x20002b26f780 with size: 0.000244 MiB 00:05:13.817 element at address: 0x20002b26f880 with size: 0.000244 MiB 00:05:13.817 element at address: 0x20002b26f980 with size: 0.000244 MiB 00:05:13.817 element at address: 0x20002b26fa80 with size: 0.000244 MiB 00:05:13.817 element at address: 0x20002b26fb80 with size: 0.000244 MiB 00:05:13.817 element at address: 0x20002b26fc80 with size: 0.000244 MiB 00:05:13.817 element at address: 0x20002b26fd80 with size: 0.000244 MiB 00:05:13.817 element at address: 0x20002b26fe80 with size: 0.000244 MiB 00:05:13.817 list of memzone associated elements. size: 646.798706 MiB 00:05:13.817 element at address: 0x20001de954c0 with size: 211.416809 MiB 00:05:13.817 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:13.817 element at address: 0x20002b26ff80 with size: 157.562622 MiB 00:05:13.817 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:13.817 element at address: 0x200015ff4740 with size: 92.045105 MiB 00:05:13.817 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_59370_0 00:05:13.817 element at address: 0x2000009ff340 with size: 48.003113 MiB 00:05:13.817 associated memzone info: size: 48.002930 MiB name: MP_evtpool_59370_0 00:05:13.817 element at address: 0x200003fff340 with size: 48.003113 MiB 00:05:13.817 associated memzone info: size: 48.002930 MiB name: MP_msgpool_59370_0 00:05:13.817 element at address: 0x2000071fdb40 with size: 36.008972 MiB 00:05:13.817 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_59370_0 00:05:13.817 element at address: 0x20001c9be900 with size: 20.255615 MiB 00:05:13.817 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:13.817 element at address: 0x2000351feb00 with size: 18.005127 MiB 00:05:13.817 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:13.817 element at address: 0x2000005ffdc0 with size: 2.000549 MiB 00:05:13.817 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_59370 00:05:13.817 element at address: 0x200003bffdc0 with size: 2.000549 MiB 00:05:13.817 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_59370 00:05:13.817 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:05:13.817 associated memzone info: size: 1.007996 MiB name: MP_evtpool_59370 00:05:13.817 element at address: 0x20001c0fde00 with size: 1.008179 MiB 00:05:13.817 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:13.817 element at address: 0x20001c8bc780 with size: 1.008179 MiB 00:05:13.817 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:13.817 element at address: 0x20001bcfde00 with size: 1.008179 MiB 00:05:13.817 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:13.817 element at address: 0x200015ef25c0 with size: 1.008179 MiB 00:05:13.817 associated memzone info: size: 
1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:13.817 element at address: 0x200003eff100 with size: 1.000549 MiB 00:05:13.817 associated memzone info: size: 1.000366 MiB name: RG_ring_0_59370 00:05:13.817 element at address: 0x200003affb80 with size: 1.000549 MiB 00:05:13.817 associated memzone info: size: 1.000366 MiB name: RG_ring_1_59370 00:05:13.817 element at address: 0x20001c4ffd40 with size: 1.000549 MiB 00:05:13.817 associated memzone info: size: 1.000366 MiB name: RG_ring_4_59370 00:05:13.817 element at address: 0x2000350fe8c0 with size: 1.000549 MiB 00:05:13.817 associated memzone info: size: 1.000366 MiB name: RG_ring_5_59370 00:05:13.817 element at address: 0x200003a7f5c0 with size: 0.500549 MiB 00:05:13.817 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_59370 00:05:13.817 element at address: 0x200003e7ecc0 with size: 0.500549 MiB 00:05:13.817 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_59370 00:05:13.817 element at address: 0x20001c07dac0 with size: 0.500549 MiB 00:05:13.817 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:13.817 element at address: 0x200015e72280 with size: 0.500549 MiB 00:05:13.817 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:13.817 element at address: 0x20001c87c440 with size: 0.250549 MiB 00:05:13.817 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:13.817 element at address: 0x200003a5e880 with size: 0.125549 MiB 00:05:13.817 associated memzone info: size: 0.125366 MiB name: RG_ring_2_59370 00:05:13.817 element at address: 0x20001bcf5ac0 with size: 0.031799 MiB 00:05:13.817 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:13.817 element at address: 0x20002b264140 with size: 0.023804 MiB 00:05:13.817 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:13.817 element at address: 0x200003a5a640 with size: 0.016174 MiB 00:05:13.817 associated memzone info: size: 0.015991 MiB name: RG_ring_3_59370 00:05:13.817 element at address: 0x20002b26a2c0 with size: 0.002502 MiB 00:05:13.817 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:13.817 element at address: 0x2000002d6080 with size: 0.000366 MiB 00:05:13.817 associated memzone info: size: 0.000183 MiB name: MP_msgpool_59370 00:05:13.817 element at address: 0x200003aff900 with size: 0.000366 MiB 00:05:13.817 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_59370 00:05:13.817 element at address: 0x200015dffd80 with size: 0.000366 MiB 00:05:13.817 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_59370 00:05:13.817 element at address: 0x20002b26ae00 with size: 0.000366 MiB 00:05:13.817 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:13.817 07:45:15 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:13.817 07:45:15 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 59370 00:05:13.817 07:45:15 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 59370 ']' 00:05:13.817 07:45:15 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 59370 00:05:13.817 07:45:15 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname 00:05:13.817 07:45:15 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:13.817 07:45:15 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59370 00:05:13.817 killing process with pid 59370 
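The heap/mempool/memzone dump above is the product of the two steps the trace shows: the env_dpdk_get_mem_stats RPC makes the target write a raw dump file, and dpdk_mem_info.py renders it. A condensed recap of that flow (script paths and the -m flag as logged; invoking rpc.py directly is an assumed stand-in for the rpc_cmd wrapper):

    # 1) ask the running target to snapshot its DPDK memory state
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats
    #    -> {"filename": "/tmp/spdk_mem_dump.txt"}   (as logged)
    # 2) summarize heaps, mempools and memzones from the dump
    /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py
    # 3) list the busy/free elements of a single heap (heap id 0 here)
    /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0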
00:05:13.817 07:45:15 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:13.817 07:45:15 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:13.817 07:45:15 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59370' 00:05:13.817 07:45:15 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 59370 00:05:13.817 07:45:15 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 59370 00:05:16.359 00:05:16.359 real 0m3.787s 00:05:16.359 user 0m3.975s 00:05:16.359 sys 0m0.516s 00:05:16.359 07:45:17 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:16.359 ************************************ 00:05:16.359 END TEST dpdk_mem_utility 00:05:16.359 ************************************ 00:05:16.359 07:45:17 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:16.359 07:45:17 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:16.359 07:45:17 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:16.359 07:45:17 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:16.359 07:45:17 -- common/autotest_common.sh@10 -- # set +x 00:05:16.359 ************************************ 00:05:16.359 START TEST event 00:05:16.359 ************************************ 00:05:16.359 07:45:17 event -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:16.359 * Looking for test storage... 00:05:16.359 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:16.359 07:45:18 event -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:16.359 07:45:18 event -- common/autotest_common.sh@1681 -- # lcov --version 00:05:16.359 07:45:18 event -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:16.359 07:45:18 event -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:16.359 07:45:18 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:16.359 07:45:18 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:16.359 07:45:18 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:16.359 07:45:18 event -- scripts/common.sh@336 -- # IFS=.-: 00:05:16.359 07:45:18 event -- scripts/common.sh@336 -- # read -ra ver1 00:05:16.359 07:45:18 event -- scripts/common.sh@337 -- # IFS=.-: 00:05:16.359 07:45:18 event -- scripts/common.sh@337 -- # read -ra ver2 00:05:16.359 07:45:18 event -- scripts/common.sh@338 -- # local 'op=<' 00:05:16.359 07:45:18 event -- scripts/common.sh@340 -- # ver1_l=2 00:05:16.359 07:45:18 event -- scripts/common.sh@341 -- # ver2_l=1 00:05:16.359 07:45:18 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:16.359 07:45:18 event -- scripts/common.sh@344 -- # case "$op" in 00:05:16.359 07:45:18 event -- scripts/common.sh@345 -- # : 1 00:05:16.359 07:45:18 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:16.359 07:45:18 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:16.359 07:45:18 event -- scripts/common.sh@365 -- # decimal 1 00:05:16.359 07:45:18 event -- scripts/common.sh@353 -- # local d=1 00:05:16.359 07:45:18 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:16.359 07:45:18 event -- scripts/common.sh@355 -- # echo 1 00:05:16.359 07:45:18 event -- scripts/common.sh@365 -- # ver1[v]=1 00:05:16.359 07:45:18 event -- scripts/common.sh@366 -- # decimal 2 00:05:16.359 07:45:18 event -- scripts/common.sh@353 -- # local d=2 00:05:16.359 07:45:18 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:16.359 07:45:18 event -- scripts/common.sh@355 -- # echo 2 00:05:16.359 07:45:18 event -- scripts/common.sh@366 -- # ver2[v]=2 00:05:16.359 07:45:18 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:16.359 07:45:18 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:16.359 07:45:18 event -- scripts/common.sh@368 -- # return 0 00:05:16.359 07:45:18 event -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:16.359 07:45:18 event -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:16.359 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:16.359 --rc genhtml_branch_coverage=1 00:05:16.359 --rc genhtml_function_coverage=1 00:05:16.359 --rc genhtml_legend=1 00:05:16.359 --rc geninfo_all_blocks=1 00:05:16.359 --rc geninfo_unexecuted_blocks=1 00:05:16.359 00:05:16.359 ' 00:05:16.359 07:45:18 event -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:16.359 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:16.359 --rc genhtml_branch_coverage=1 00:05:16.359 --rc genhtml_function_coverage=1 00:05:16.359 --rc genhtml_legend=1 00:05:16.359 --rc geninfo_all_blocks=1 00:05:16.359 --rc geninfo_unexecuted_blocks=1 00:05:16.359 00:05:16.359 ' 00:05:16.359 07:45:18 event -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:16.359 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:16.359 --rc genhtml_branch_coverage=1 00:05:16.359 --rc genhtml_function_coverage=1 00:05:16.359 --rc genhtml_legend=1 00:05:16.359 --rc geninfo_all_blocks=1 00:05:16.359 --rc geninfo_unexecuted_blocks=1 00:05:16.359 00:05:16.359 ' 00:05:16.359 07:45:18 event -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:16.359 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:16.359 --rc genhtml_branch_coverage=1 00:05:16.359 --rc genhtml_function_coverage=1 00:05:16.359 --rc genhtml_legend=1 00:05:16.359 --rc geninfo_all_blocks=1 00:05:16.359 --rc geninfo_unexecuted_blocks=1 00:05:16.359 00:05:16.359 ' 00:05:16.359 07:45:18 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:16.359 07:45:18 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:16.359 07:45:18 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:16.359 07:45:18 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:05:16.359 07:45:18 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:16.359 07:45:18 event -- common/autotest_common.sh@10 -- # set +x 00:05:16.359 ************************************ 00:05:16.359 START TEST event_perf 00:05:16.359 ************************************ 00:05:16.359 07:45:18 event.event_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:16.360 Running I/O for 1 seconds...[2024-10-09 
07:45:18.207517] Starting SPDK v25.01-pre git sha1 1c2942c86 / DPDK 24.03.0 initialization... 00:05:16.360 [2024-10-09 07:45:18.207828] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59478 ] 00:05:16.618 [2024-10-09 07:45:18.381017] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:16.619 [2024-10-09 07:45:18.569730] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:05:16.619 [2024-10-09 07:45:18.569855] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:05:16.619 [2024-10-09 07:45:18.569984] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:16.619 [2024-10-09 07:45:18.569995] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:05:17.995 Running I/O for 1 seconds... 00:05:17.995 lcore 0: 190544 00:05:17.995 lcore 1: 190546 00:05:17.995 lcore 2: 190549 00:05:17.995 lcore 3: 190541 00:05:17.995 done. 00:05:17.995 00:05:17.995 real 0m1.803s 00:05:17.995 user 0m4.563s 00:05:17.995 ************************************ 00:05:17.995 END TEST event_perf 00:05:17.995 ************************************ 00:05:17.995 sys 0m0.115s 00:05:17.995 07:45:19 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:17.995 07:45:19 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:18.254 07:45:20 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:18.254 07:45:20 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:05:18.254 07:45:20 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:18.254 07:45:20 event -- common/autotest_common.sh@10 -- # set +x 00:05:18.254 ************************************ 00:05:18.254 START TEST event_reactor 00:05:18.254 ************************************ 00:05:18.254 07:45:20 event.event_reactor -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:18.254 [2024-10-09 07:45:20.057681] Starting SPDK v25.01-pre git sha1 1c2942c86 / DPDK 24.03.0 initialization... 
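The scripts/common.sh xtrace that prefixes each suite (visible in full just before event_perf above) is a field-wise version comparison: 'lt 1.15 2' splits both version strings on '.', '-' and ':' and compares them numerically component by component, and the result decides which lcov coverage options get exported. A simplified bash re-implementation of that logic (a sketch, not the repo code):

    version_lt() {
        # split on the same separators common.sh uses (IFS=.-:)
        local IFS=.-:
        local -a a=($1) b=($2)
        local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < n; i++ )); do
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1   # first differing field decides
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
        done
        return 1                                        # equal -> not less-than
    }
    version_lt 1.15 2 && echo "old lcov: enable branch/function coverage flags"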
00:05:18.254 [2024-10-09 07:45:20.057820] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59522 ] 00:05:18.254 [2024-10-09 07:45:20.223820] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:18.512 [2024-10-09 07:45:20.424026] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:19.936 test_start 00:05:19.936 oneshot 00:05:19.936 tick 100 00:05:19.936 tick 100 00:05:19.936 tick 250 00:05:19.936 tick 100 00:05:19.936 tick 100 00:05:19.936 tick 100 00:05:19.936 tick 250 00:05:19.936 tick 500 00:05:19.936 tick 100 00:05:19.936 tick 100 00:05:19.936 tick 250 00:05:19.936 tick 100 00:05:19.936 tick 100 00:05:19.936 test_end 00:05:19.936 ************************************ 00:05:19.936 END TEST event_reactor 00:05:19.936 00:05:19.936 real 0m1.779s 00:05:19.936 user 0m1.579s 00:05:19.936 sys 0m0.090s 00:05:19.936 07:45:21 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:19.936 07:45:21 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:19.936 ************************************ 00:05:19.936 07:45:21 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:19.936 07:45:21 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:05:19.936 07:45:21 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:19.936 07:45:21 event -- common/autotest_common.sh@10 -- # set +x 00:05:19.936 ************************************ 00:05:19.936 START TEST event_reactor_perf 00:05:19.936 ************************************ 00:05:19.936 07:45:21 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:19.936 [2024-10-09 07:45:21.902296] Starting SPDK v25.01-pre git sha1 1c2942c86 / DPDK 24.03.0 initialization... 
00:05:19.936 [2024-10-09 07:45:21.902503] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59560 ] 00:05:20.194 [2024-10-09 07:45:22.082829] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:20.452 [2024-10-09 07:45:22.311082] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:21.830 test_start 00:05:21.830 test_end 00:05:21.830 Performance: 287781 events per second 00:05:21.830 00:05:21.830 real 0m1.835s 00:05:21.830 user 0m1.613s 00:05:21.830 sys 0m0.111s 00:05:21.830 ************************************ 00:05:21.830 END TEST event_reactor_perf 00:05:21.830 ************************************ 00:05:21.830 07:45:23 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:21.830 07:45:23 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:21.830 07:45:23 event -- event/event.sh@49 -- # uname -s 00:05:21.830 07:45:23 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:21.830 07:45:23 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:21.830 07:45:23 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:21.830 07:45:23 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:21.830 07:45:23 event -- common/autotest_common.sh@10 -- # set +x 00:05:21.830 ************************************ 00:05:21.830 START TEST event_scheduler 00:05:21.830 ************************************ 00:05:21.830 07:45:23 event.event_scheduler -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:21.830 * Looking for test storage... 
00:05:21.830 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:05:21.830 07:45:23 event.event_scheduler -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:21.830 07:45:23 event.event_scheduler -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:21.830 07:45:23 event.event_scheduler -- common/autotest_common.sh@1681 -- # lcov --version 00:05:22.089 07:45:23 event.event_scheduler -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:22.089 07:45:23 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:22.089 07:45:23 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:22.089 07:45:23 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:22.089 07:45:23 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:05:22.089 07:45:23 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:05:22.089 07:45:23 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:05:22.089 07:45:23 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:05:22.089 07:45:23 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:05:22.089 07:45:23 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:05:22.089 07:45:23 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:05:22.089 07:45:23 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:22.089 07:45:23 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:05:22.089 07:45:23 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:05:22.089 07:45:23 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:22.089 07:45:23 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:22.089 07:45:23 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:05:22.089 07:45:23 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:05:22.089 07:45:23 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:22.089 07:45:23 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:05:22.089 07:45:23 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:05:22.089 07:45:23 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:05:22.089 07:45:23 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:05:22.089 07:45:23 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:22.089 07:45:23 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:05:22.089 07:45:23 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:05:22.089 07:45:23 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:22.089 07:45:23 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:22.089 07:45:23 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:05:22.089 07:45:23 event.event_scheduler -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:22.089 07:45:23 event.event_scheduler -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:22.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:22.089 --rc genhtml_branch_coverage=1 00:05:22.089 --rc genhtml_function_coverage=1 00:05:22.089 --rc genhtml_legend=1 00:05:22.089 --rc geninfo_all_blocks=1 00:05:22.089 --rc geninfo_unexecuted_blocks=1 00:05:22.089 00:05:22.089 ' 00:05:22.089 07:45:23 event.event_scheduler -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:22.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:22.089 --rc genhtml_branch_coverage=1 00:05:22.089 --rc genhtml_function_coverage=1 00:05:22.089 --rc genhtml_legend=1 00:05:22.089 --rc geninfo_all_blocks=1 00:05:22.089 --rc geninfo_unexecuted_blocks=1 00:05:22.089 00:05:22.089 ' 00:05:22.089 07:45:23 event.event_scheduler -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:22.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:22.089 --rc genhtml_branch_coverage=1 00:05:22.089 --rc genhtml_function_coverage=1 00:05:22.089 --rc genhtml_legend=1 00:05:22.089 --rc geninfo_all_blocks=1 00:05:22.089 --rc geninfo_unexecuted_blocks=1 00:05:22.089 00:05:22.089 ' 00:05:22.089 07:45:23 event.event_scheduler -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:22.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:22.089 --rc genhtml_branch_coverage=1 00:05:22.089 --rc genhtml_function_coverage=1 00:05:22.089 --rc genhtml_legend=1 00:05:22.089 --rc geninfo_all_blocks=1 00:05:22.089 --rc geninfo_unexecuted_blocks=1 00:05:22.089 00:05:22.089 ' 00:05:22.089 07:45:23 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:22.089 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
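The event_scheduler suite being launched here drives everything over RPC: the test app is started with --wait-for-rpc, the dynamic scheduler is selected, init is completed, and threads are then created, retuned and deleted through a test-only rpc.py plugin. A hedged sketch of that sequence (the calls mirror the xtrace that follows; the rpc wrapper function and plugin lookup are simplifications):

    rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock "$@"; }
    rpc framework_set_scheduler dynamic     # pick the dynamic scheduler before init
    rpc framework_start_init                # complete startup (--wait-for-rpc mode)
    # create a thread with a name, cpumask and active percentage:
    rpc --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
    rpc --plugin scheduler_plugin scheduler_thread_set_active 11 50   # id returned by create
    rpc --plugin scheduler_plugin scheduler_thread_delete 12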
00:05:22.089 07:45:23 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=59636 00:05:22.089 07:45:23 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:22.089 07:45:23 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:22.089 07:45:23 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 59636 00:05:22.089 07:45:23 event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 59636 ']' 00:05:22.089 07:45:23 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:22.089 07:45:23 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:22.089 07:45:23 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:22.089 07:45:23 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:22.089 07:45:23 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:22.089 [2024-10-09 07:45:24.042818] Starting SPDK v25.01-pre git sha1 1c2942c86 / DPDK 24.03.0 initialization... 00:05:22.089 [2024-10-09 07:45:24.043219] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59636 ] 00:05:22.347 [2024-10-09 07:45:24.218733] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:22.605 [2024-10-09 07:45:24.458190] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:22.605 [2024-10-09 07:45:24.458403] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:05:22.605 [2024-10-09 07:45:24.458528] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:05:22.605 [2024-10-09 07:45:24.458539] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:05:23.173 07:45:25 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:23.173 07:45:25 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:05:23.173 07:45:25 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:23.173 07:45:25 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:23.173 07:45:25 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:23.173 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:23.173 POWER: Cannot set governor of lcore 0 to userspace 00:05:23.173 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:23.173 POWER: Cannot set governor of lcore 0 to performance 00:05:23.173 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:23.173 POWER: Cannot set governor of lcore 0 to userspace 00:05:23.173 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:23.173 POWER: Cannot set governor of lcore 0 to userspace 00:05:23.173 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:05:23.173 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:05:23.173 POWER: Unable to set Power 
Management Environment for lcore 0 00:05:23.173 [2024-10-09 07:45:25.020761] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:05:23.173 [2024-10-09 07:45:25.020785] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:05:23.173 [2024-10-09 07:45:25.020802] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:05:23.173 [2024-10-09 07:45:25.020824] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:23.173 [2024-10-09 07:45:25.020837] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:23.173 [2024-10-09 07:45:25.020849] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:23.173 07:45:25 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:23.173 07:45:25 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:23.173 07:45:25 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:23.173 07:45:25 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:23.435 [2024-10-09 07:45:25.293810] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:05:23.435 07:45:25 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:23.435 07:45:25 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:23.435 07:45:25 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:23.435 07:45:25 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:23.435 07:45:25 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:23.435 ************************************ 00:05:23.435 START TEST scheduler_create_thread 00:05:23.435 ************************************ 00:05:23.435 07:45:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread 00:05:23.435 07:45:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:23.435 07:45:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:23.435 07:45:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:23.435 2 00:05:23.435 07:45:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:23.435 07:45:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:23.435 07:45:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:23.435 07:45:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:23.435 3 00:05:23.435 07:45:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:23.435 07:45:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:23.435 07:45:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:23.435 07:45:25 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@10 -- # set +x 00:05:23.435 4 00:05:23.435 07:45:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:23.435 07:45:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:23.435 07:45:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:23.435 07:45:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:23.435 5 00:05:23.435 07:45:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:23.435 07:45:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:23.435 07:45:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:23.435 07:45:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:23.435 6 00:05:23.435 07:45:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:23.435 07:45:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:23.435 07:45:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:23.435 07:45:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:23.435 7 00:05:23.435 07:45:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:23.435 07:45:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:23.435 07:45:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:23.435 07:45:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:23.435 8 00:05:23.435 07:45:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:23.435 07:45:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:23.435 07:45:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:23.435 07:45:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:23.435 9 00:05:23.435 07:45:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:23.435 07:45:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:23.435 07:45:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:23.435 07:45:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:23.435 10 00:05:23.435 07:45:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:23.436 07:45:25 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:23.436 07:45:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:23.436 07:45:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:23.436 07:45:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:23.436 07:45:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:23.436 07:45:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:23.436 07:45:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:23.436 07:45:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:24.383 07:45:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:24.383 07:45:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:24.383 07:45:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:24.383 07:45:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:25.759 07:45:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:25.759 07:45:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:25.759 07:45:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:25.759 07:45:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:25.759 07:45:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:26.694 ************************************ 00:05:26.694 END TEST scheduler_create_thread 00:05:26.694 ************************************ 00:05:26.694 07:45:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:26.694 00:05:26.694 real 0m3.383s 00:05:26.694 user 0m0.019s 00:05:26.694 sys 0m0.007s 00:05:26.694 07:45:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:26.694 07:45:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:26.952 07:45:28 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:26.952 07:45:28 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 59636 00:05:26.952 07:45:28 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 59636 ']' 00:05:26.952 07:45:28 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 59636 00:05:26.952 07:45:28 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:05:26.952 07:45:28 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:26.952 07:45:28 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59636 00:05:26.952 killing process with pid 59636 00:05:26.952 07:45:28 
event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:05:26.952 07:45:28 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:05:26.952 07:45:28 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59636' 00:05:26.953 07:45:28 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 59636 00:05:26.953 07:45:28 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 59636 00:05:27.212 [2024-10-09 07:45:29.068695] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:05:28.588 ************************************ 00:05:28.588 END TEST event_scheduler 00:05:28.588 ************************************ 00:05:28.588 00:05:28.588 real 0m6.457s 00:05:28.588 user 0m12.751s 00:05:28.588 sys 0m0.443s 00:05:28.588 07:45:30 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:28.588 07:45:30 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:28.588 07:45:30 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:28.588 07:45:30 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:28.588 07:45:30 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:28.588 07:45:30 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:28.588 07:45:30 event -- common/autotest_common.sh@10 -- # set +x 00:05:28.588 ************************************ 00:05:28.588 START TEST app_repeat 00:05:28.588 ************************************ 00:05:28.588 07:45:30 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:05:28.588 07:45:30 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:28.588 07:45:30 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:28.588 07:45:30 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:28.588 07:45:30 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:28.588 07:45:30 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:28.588 07:45:30 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:28.588 07:45:30 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:28.588 Process app_repeat pid: 59753 00:05:28.588 spdk_app_start Round 0 00:05:28.588 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
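For reference, the scheduler_create_thread test that just finished drives everything through rpc.py with the scheduler test plugin. A minimal standalone sketch of the same call sequence follows; the RPC socket path is an assumption (the scheduler test's own socket is not printed in this trace), while the subcommands and the -n/-m/-a flags are taken directly from the xtrace lines above:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk.sock   # assumed default socket; not shown in the trace
    # -n thread name, -m cpumask, -a active percentage (0-100)
    "$rpc" -s "$sock" --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
    "$rpc" -s "$sock" --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0
    # an unpinned thread at 30% load; the returned id feeds the later set/delete calls
    tid=$("$rpc" -s "$sock" --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30)
    "$rpc" -s "$sock" --plugin scheduler_plugin scheduler_thread_set_active "$tid" 50
    "$rpc" -s "$sock" --plugin scheduler_plugin scheduler_thread_delete "$tid"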
00:05:28.588 07:45:30 event.app_repeat -- event/event.sh@19 -- # repeat_pid=59753 00:05:28.588 07:45:30 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:28.588 07:45:30 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 59753' 00:05:28.588 07:45:30 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:28.588 07:45:30 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:28.588 07:45:30 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:28.588 07:45:30 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59753 /var/tmp/spdk-nbd.sock 00:05:28.588 07:45:30 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 59753 ']' 00:05:28.588 07:45:30 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:28.589 07:45:30 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:28.589 07:45:30 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:28.589 07:45:30 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:28.589 07:45:30 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:28.589 [2024-10-09 07:45:30.331748] Starting SPDK v25.01-pre git sha1 1c2942c86 / DPDK 24.03.0 initialization... 00:05:28.589 [2024-10-09 07:45:30.331924] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59753 ] 00:05:28.589 [2024-10-09 07:45:30.500034] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:28.847 [2024-10-09 07:45:30.683172] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:28.847 [2024-10-09 07:45:30.683181] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:05:29.780 07:45:31 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:29.780 07:45:31 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:29.780 07:45:31 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:29.780 Malloc0 00:05:29.780 07:45:31 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:30.344 Malloc1 00:05:30.344 07:45:32 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:30.344 07:45:32 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:30.344 07:45:32 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:30.344 07:45:32 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:30.344 07:45:32 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:30.344 07:45:32 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:30.344 07:45:32 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:30.344 07:45:32 event.app_repeat -- bdev/nbd_common.sh@9 -- # 
local rpc_server=/var/tmp/spdk-nbd.sock 00:05:30.344 07:45:32 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:30.344 07:45:32 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:30.344 07:45:32 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:30.344 07:45:32 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:30.344 07:45:32 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:30.344 07:45:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:30.344 07:45:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:30.344 07:45:32 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:30.601 /dev/nbd0 00:05:30.601 07:45:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:30.601 07:45:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:30.601 07:45:32 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:05:30.601 07:45:32 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:30.601 07:45:32 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:30.601 07:45:32 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:30.601 07:45:32 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:05:30.601 07:45:32 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:30.601 07:45:32 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:30.601 07:45:32 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:30.602 07:45:32 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:30.602 1+0 records in 00:05:30.602 1+0 records out 00:05:30.602 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000304136 s, 13.5 MB/s 00:05:30.602 07:45:32 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:30.602 07:45:32 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:30.602 07:45:32 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:30.602 07:45:32 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:30.602 07:45:32 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:30.602 07:45:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:30.602 07:45:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:30.602 07:45:32 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:30.878 /dev/nbd1 00:05:30.879 07:45:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:30.879 07:45:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:30.879 07:45:32 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:05:30.879 07:45:32 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:30.879 07:45:32 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:30.879 07:45:32 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:30.879 07:45:32 event.app_repeat -- 
common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:05:30.879 07:45:32 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:30.879 07:45:32 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:30.879 07:45:32 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:30.879 07:45:32 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:30.879 1+0 records in 00:05:30.879 1+0 records out 00:05:30.879 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000415506 s, 9.9 MB/s 00:05:30.879 07:45:32 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:30.879 07:45:32 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:30.879 07:45:32 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:30.879 07:45:32 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:30.879 07:45:32 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:30.879 07:45:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:30.879 07:45:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:30.879 07:45:32 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:30.879 07:45:32 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:30.879 07:45:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:31.145 07:45:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:31.145 { 00:05:31.145 "nbd_device": "/dev/nbd0", 00:05:31.145 "bdev_name": "Malloc0" 00:05:31.145 }, 00:05:31.145 { 00:05:31.145 "nbd_device": "/dev/nbd1", 00:05:31.145 "bdev_name": "Malloc1" 00:05:31.145 } 00:05:31.145 ]' 00:05:31.145 07:45:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:31.145 { 00:05:31.145 "nbd_device": "/dev/nbd0", 00:05:31.145 "bdev_name": "Malloc0" 00:05:31.145 }, 00:05:31.145 { 00:05:31.145 "nbd_device": "/dev/nbd1", 00:05:31.145 "bdev_name": "Malloc1" 00:05:31.145 } 00:05:31.145 ]' 00:05:31.145 07:45:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:31.145 07:45:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:31.145 /dev/nbd1' 00:05:31.145 07:45:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:31.145 /dev/nbd1' 00:05:31.145 07:45:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:31.145 07:45:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:31.145 07:45:33 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:31.145 07:45:33 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:31.145 07:45:33 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:31.145 07:45:33 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:31.145 07:45:33 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:31.145 07:45:33 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:31.145 07:45:33 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:31.145 07:45:33 event.app_repeat -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:31.145 07:45:33 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:31.145 07:45:33 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:31.145 256+0 records in 00:05:31.145 256+0 records out 00:05:31.145 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0079555 s, 132 MB/s 00:05:31.145 07:45:33 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:31.145 07:45:33 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:31.403 256+0 records in 00:05:31.403 256+0 records out 00:05:31.403 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0316218 s, 33.2 MB/s 00:05:31.403 07:45:33 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:31.403 07:45:33 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:31.403 256+0 records in 00:05:31.403 256+0 records out 00:05:31.403 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0348567 s, 30.1 MB/s 00:05:31.403 07:45:33 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:31.403 07:45:33 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:31.403 07:45:33 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:31.403 07:45:33 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:31.403 07:45:33 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:31.403 07:45:33 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:31.403 07:45:33 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:31.403 07:45:33 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:31.403 07:45:33 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:31.403 07:45:33 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:31.403 07:45:33 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:31.403 07:45:33 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:31.403 07:45:33 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:31.403 07:45:33 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:31.403 07:45:33 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:31.403 07:45:33 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:31.403 07:45:33 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:31.403 07:45:33 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:31.403 07:45:33 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:31.662 07:45:33 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:31.662 07:45:33 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:31.662 07:45:33 event.app_repeat 
-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:31.662 07:45:33 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:31.662 07:45:33 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:31.662 07:45:33 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:31.662 07:45:33 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:31.662 07:45:33 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:31.662 07:45:33 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:31.662 07:45:33 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:31.920 07:45:33 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:31.920 07:45:33 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:31.920 07:45:33 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:31.920 07:45:33 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:31.920 07:45:33 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:31.920 07:45:33 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:31.920 07:45:33 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:31.920 07:45:33 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:31.920 07:45:33 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:31.920 07:45:33 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:31.920 07:45:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:32.179 07:45:34 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:32.179 07:45:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:32.179 07:45:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:32.179 07:45:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:32.179 07:45:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:32.179 07:45:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:32.179 07:45:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:32.179 07:45:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:32.179 07:45:34 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:32.179 07:45:34 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:32.179 07:45:34 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:32.179 07:45:34 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:32.179 07:45:34 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:32.746 07:45:34 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:34.121 [2024-10-09 07:45:35.703022] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:34.121 [2024-10-09 07:45:35.886194] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:05:34.121 [2024-10-09 07:45:35.886201] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.121 [2024-10-09 07:45:36.058004] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 
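Each app_repeat round runs the same nbd data-verification pass seen above. Condensed from the trace, with the temp-file path, block size, and count exactly as printed, the core of the cycle is:

    tmp=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
    dd if=/dev/urandom of="$tmp" bs=4096 count=256               # 1 MiB of reference data
    for nbd in /dev/nbd0 /dev/nbd1; do
      dd if="$tmp" of="$nbd" bs=4096 count=256 oflag=direct      # write through each nbd device
    done
    for nbd in /dev/nbd0 /dev/nbd1; do
      cmp -b -n 1M "$tmp" "$nbd"                                 # read back and byte-compare
    done
    rm "$tmp"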
00:05:34.121 [2024-10-09 07:45:36.058341] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:36.023 spdk_app_start Round 1 00:05:36.023 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:36.023 07:45:37 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:36.023 07:45:37 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:36.023 07:45:37 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59753 /var/tmp/spdk-nbd.sock 00:05:36.023 07:45:37 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 59753 ']' 00:05:36.023 07:45:37 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:36.023 07:45:37 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:36.023 07:45:37 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:36.023 07:45:37 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:36.023 07:45:37 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:36.023 07:45:37 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:36.023 07:45:37 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:36.023 07:45:37 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:36.281 Malloc0 00:05:36.281 07:45:38 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:36.539 Malloc1 00:05:36.539 07:45:38 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:36.539 07:45:38 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:36.539 07:45:38 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:36.539 07:45:38 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:36.539 07:45:38 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:36.539 07:45:38 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:36.539 07:45:38 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:36.539 07:45:38 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:36.539 07:45:38 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:36.539 07:45:38 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:36.539 07:45:38 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:36.539 07:45:38 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:36.539 07:45:38 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:36.539 07:45:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:36.539 07:45:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:36.539 07:45:38 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:36.797 /dev/nbd0 00:05:36.797 07:45:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 
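The waitfornbd calls traced in each round poll /proc/partitions for the device node, then issue one direct read to confirm it answers I/O. A sketch consistent with the xtrace output follows; the inter-poll delay is an assumption (no sleep is visible in the log), the rest mirrors the traced steps:

    waitfornbd() {
      local nbd_name=$1 i
      for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1                                # assumed backoff between probes
      done
      # one direct 4 KiB read, size-checked and discarded, as in the trace
      local probe=/home/vagrant/spdk_repo/spdk/test/event/nbdtest
      dd if=/dev/"$nbd_name" of="$probe" bs=4096 count=1 iflag=direct
      local size
      size=$(stat -c %s "$probe")
      rm -f "$probe"
      [ "$size" != 0 ]
    }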
00:05:36.797 07:45:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:36.797 07:45:38 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:05:36.797 07:45:38 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:36.797 07:45:38 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:36.797 07:45:38 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:36.797 07:45:38 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:05:36.797 07:45:38 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:36.797 07:45:38 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:36.797 07:45:38 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:36.797 07:45:38 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:36.797 1+0 records in 00:05:36.797 1+0 records out 00:05:36.797 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000341079 s, 12.0 MB/s 00:05:36.797 07:45:38 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:36.797 07:45:38 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:36.797 07:45:38 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:37.079 07:45:38 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:37.079 07:45:38 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:37.079 07:45:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:37.079 07:45:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:37.079 07:45:38 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:37.079 /dev/nbd1 00:05:37.337 07:45:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:37.337 07:45:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:37.337 07:45:39 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:05:37.337 07:45:39 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:37.337 07:45:39 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:37.337 07:45:39 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:37.337 07:45:39 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:05:37.337 07:45:39 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:37.337 07:45:39 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:37.337 07:45:39 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:37.337 07:45:39 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:37.337 1+0 records in 00:05:37.337 1+0 records out 00:05:37.337 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000336011 s, 12.2 MB/s 00:05:37.337 07:45:39 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:37.337 07:45:39 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:37.337 07:45:39 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:37.337 07:45:39 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:37.337 07:45:39 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:37.337 07:45:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:37.337 07:45:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:37.337 07:45:39 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:37.337 07:45:39 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:37.337 07:45:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:37.596 07:45:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:37.596 { 00:05:37.596 "nbd_device": "/dev/nbd0", 00:05:37.596 "bdev_name": "Malloc0" 00:05:37.596 }, 00:05:37.596 { 00:05:37.596 "nbd_device": "/dev/nbd1", 00:05:37.596 "bdev_name": "Malloc1" 00:05:37.596 } 00:05:37.596 ]' 00:05:37.596 07:45:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:37.596 { 00:05:37.596 "nbd_device": "/dev/nbd0", 00:05:37.596 "bdev_name": "Malloc0" 00:05:37.596 }, 00:05:37.596 { 00:05:37.596 "nbd_device": "/dev/nbd1", 00:05:37.596 "bdev_name": "Malloc1" 00:05:37.596 } 00:05:37.596 ]' 00:05:37.596 07:45:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:37.596 07:45:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:37.596 /dev/nbd1' 00:05:37.596 07:45:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:37.596 /dev/nbd1' 00:05:37.596 07:45:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:37.596 07:45:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:37.596 07:45:39 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:37.596 07:45:39 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:37.596 07:45:39 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:37.596 07:45:39 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:37.596 07:45:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:37.596 07:45:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:37.596 07:45:39 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:37.596 07:45:39 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:37.596 07:45:39 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:37.596 07:45:39 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:37.596 256+0 records in 00:05:37.596 256+0 records out 00:05:37.596 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0101429 s, 103 MB/s 00:05:37.596 07:45:39 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:37.596 07:45:39 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:37.596 256+0 records in 00:05:37.596 256+0 records out 00:05:37.596 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0257696 s, 40.7 MB/s 00:05:37.596 07:45:39 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:37.596 07:45:39 
event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:37.596 256+0 records in 00:05:37.596 256+0 records out 00:05:37.596 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0360676 s, 29.1 MB/s 00:05:37.596 07:45:39 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:37.596 07:45:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:37.596 07:45:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:37.596 07:45:39 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:37.596 07:45:39 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:37.596 07:45:39 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:37.596 07:45:39 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:37.596 07:45:39 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:37.596 07:45:39 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:37.596 07:45:39 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:37.596 07:45:39 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:37.596 07:45:39 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:37.596 07:45:39 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:37.596 07:45:39 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:37.596 07:45:39 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:37.596 07:45:39 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:37.596 07:45:39 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:37.596 07:45:39 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:37.596 07:45:39 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:38.165 07:45:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:38.165 07:45:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:38.165 07:45:39 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:38.165 07:45:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:38.165 07:45:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:38.165 07:45:39 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:38.165 07:45:39 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:38.165 07:45:39 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:38.165 07:45:39 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:38.165 07:45:39 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:38.423 07:45:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:38.423 07:45:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:38.423 07:45:40 event.app_repeat -- bdev/nbd_common.sh@35 
-- # local nbd_name=nbd1 00:05:38.423 07:45:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:38.423 07:45:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:38.423 07:45:40 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:38.423 07:45:40 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:38.423 07:45:40 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:38.423 07:45:40 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:38.423 07:45:40 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:38.423 07:45:40 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:38.988 07:45:40 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:38.988 07:45:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:38.988 07:45:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:38.988 07:45:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:38.988 07:45:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:38.988 07:45:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:38.988 07:45:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:38.988 07:45:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:38.988 07:45:40 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:38.988 07:45:40 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:38.988 07:45:40 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:38.988 07:45:40 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:38.988 07:45:40 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:39.246 07:45:41 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:40.623 [2024-10-09 07:45:42.302316] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:40.623 [2024-10-09 07:45:42.483340] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:05:40.623 [2024-10-09 07:45:42.483360] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.881 [2024-10-09 07:45:42.655564] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:40.881 [2024-10-09 07:45:42.655670] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:42.257 spdk_app_start Round 2 00:05:42.257 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:42.257 07:45:44 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:42.257 07:45:44 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:42.257 07:45:44 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59753 /var/tmp/spdk-nbd.sock 00:05:42.257 07:45:44 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 59753 ']' 00:05:42.257 07:45:44 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:42.257 07:45:44 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:42.257 07:45:44 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
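After starting and again after stopping the disks, the trace counts attached devices by piping nbd_get_disks JSON through jq. The pattern, lifted from the calls above (the || true keeps grep's nonzero exit on an empty list from aborting the run, matching the traced fallback):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-nbd.sock
    disks_json=$("$rpc" -s "$sock" nbd_get_disks)
    # '.[] | .nbd_device' yields one device node per line; grep -c counts them
    count=$(echo "$disks_json" | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)
    echo "attached nbd devices: $count"          # 2 while running, 0 after nbd_stop_disk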
00:05:42.257 07:45:44 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:42.257 07:45:44 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:42.838 07:45:44 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:42.838 07:45:44 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:42.838 07:45:44 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:43.107 Malloc0 00:05:43.108 07:45:44 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:43.365 Malloc1 00:05:43.365 07:45:45 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:43.365 07:45:45 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:43.365 07:45:45 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:43.365 07:45:45 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:43.365 07:45:45 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:43.365 07:45:45 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:43.365 07:45:45 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:43.365 07:45:45 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:43.365 07:45:45 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:43.365 07:45:45 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:43.365 07:45:45 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:43.365 07:45:45 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:43.365 07:45:45 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:43.365 07:45:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:43.365 07:45:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:43.365 07:45:45 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:43.623 /dev/nbd0 00:05:43.623 07:45:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:43.881 07:45:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:43.881 07:45:45 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:05:43.881 07:45:45 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:43.881 07:45:45 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:43.881 07:45:45 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:43.881 07:45:45 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:05:43.881 07:45:45 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:43.881 07:45:45 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:43.881 07:45:45 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:43.881 07:45:45 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:43.881 1+0 records in 00:05:43.881 1+0 records out 
00:05:43.881 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000518654 s, 7.9 MB/s 00:05:43.881 07:45:45 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:43.881 07:45:45 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:43.881 07:45:45 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:43.881 07:45:45 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:43.881 07:45:45 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:43.881 07:45:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:43.881 07:45:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:43.881 07:45:45 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:44.139 /dev/nbd1 00:05:44.139 07:45:46 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:44.139 07:45:46 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:44.139 07:45:46 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:05:44.139 07:45:46 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:44.139 07:45:46 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:44.139 07:45:46 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:44.139 07:45:46 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:05:44.139 07:45:46 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:44.139 07:45:46 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:44.139 07:45:46 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:44.139 07:45:46 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:44.139 1+0 records in 00:05:44.139 1+0 records out 00:05:44.139 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000382836 s, 10.7 MB/s 00:05:44.139 07:45:46 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:44.139 07:45:46 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:44.139 07:45:46 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:44.139 07:45:46 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:44.139 07:45:46 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:44.139 07:45:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:44.139 07:45:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:44.139 07:45:46 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:44.139 07:45:46 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:44.139 07:45:46 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:44.396 07:45:46 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:44.396 { 00:05:44.396 "nbd_device": "/dev/nbd0", 00:05:44.396 "bdev_name": "Malloc0" 00:05:44.396 }, 00:05:44.396 { 00:05:44.396 "nbd_device": "/dev/nbd1", 00:05:44.396 "bdev_name": "Malloc1" 00:05:44.396 } 
00:05:44.396 ]' 00:05:44.396 07:45:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:44.396 07:45:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:44.396 { 00:05:44.396 "nbd_device": "/dev/nbd0", 00:05:44.396 "bdev_name": "Malloc0" 00:05:44.396 }, 00:05:44.396 { 00:05:44.396 "nbd_device": "/dev/nbd1", 00:05:44.396 "bdev_name": "Malloc1" 00:05:44.396 } 00:05:44.396 ]' 00:05:44.654 07:45:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:44.654 /dev/nbd1' 00:05:44.654 07:45:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:44.654 07:45:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:44.654 /dev/nbd1' 00:05:44.654 07:45:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:44.654 07:45:46 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:44.654 07:45:46 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:44.654 07:45:46 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:44.654 07:45:46 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:44.654 07:45:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:44.654 07:45:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:44.654 07:45:46 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:44.654 07:45:46 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:44.654 07:45:46 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:44.654 07:45:46 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:44.654 256+0 records in 00:05:44.654 256+0 records out 00:05:44.654 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00903704 s, 116 MB/s 00:05:44.654 07:45:46 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:44.654 07:45:46 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:44.654 256+0 records in 00:05:44.654 256+0 records out 00:05:44.654 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0312215 s, 33.6 MB/s 00:05:44.654 07:45:46 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:44.654 07:45:46 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:44.654 256+0 records in 00:05:44.654 256+0 records out 00:05:44.654 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0393008 s, 26.7 MB/s 00:05:44.654 07:45:46 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:44.654 07:45:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:44.654 07:45:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:44.654 07:45:46 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:44.654 07:45:46 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:44.654 07:45:46 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:44.654 07:45:46 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:44.654 07:45:46 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:44.654 07:45:46 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:44.654 07:45:46 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:44.654 07:45:46 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:44.654 07:45:46 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:44.654 07:45:46 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:44.654 07:45:46 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:44.654 07:45:46 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:44.654 07:45:46 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:44.654 07:45:46 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:44.654 07:45:46 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:44.654 07:45:46 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:44.913 07:45:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:44.913 07:45:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:44.913 07:45:46 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:44.913 07:45:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:44.913 07:45:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:44.913 07:45:46 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:44.913 07:45:46 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:44.913 07:45:46 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:44.913 07:45:46 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:44.913 07:45:46 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:45.479 07:45:47 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:45.479 07:45:47 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:45.479 07:45:47 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:45.479 07:45:47 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:45.479 07:45:47 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:45.479 07:45:47 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:45.479 07:45:47 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:45.479 07:45:47 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:45.479 07:45:47 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:45.479 07:45:47 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:45.479 07:45:47 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:45.737 07:45:47 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:45.737 07:45:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:45.737 07:45:47 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:05:45.737 07:45:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:45.737 07:45:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:45.737 07:45:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:45.737 07:45:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:45.737 07:45:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:45.737 07:45:47 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:45.737 07:45:47 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:45.737 07:45:47 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:45.737 07:45:47 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:45.737 07:45:47 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:46.303 07:45:48 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:47.681 [2024-10-09 07:45:49.335918] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:47.681 [2024-10-09 07:45:49.542492] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:05:47.681 [2024-10-09 07:45:49.542503] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.939 [2024-10-09 07:45:49.729483] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:47.939 [2024-10-09 07:45:49.729592] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:49.316 07:45:51 event.app_repeat -- event/event.sh@38 -- # waitforlisten 59753 /var/tmp/spdk-nbd.sock 00:05:49.316 07:45:51 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 59753 ']' 00:05:49.316 07:45:51 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:49.316 07:45:51 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:49.316 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:49.316 07:45:51 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
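With Round 2 closing the same way as the first two, the shape of the outer driver is visible. Reconstructed from the echoes and RPC calls above (the pid capture at launch and the killprocess return-code details are assumptions; the traced guards, pid, socket, and signals are as printed):

    repeat_pid=59753                             # printed above; normally captured from $! at launch
    trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT
    for i in {0..2}; do
      echo "spdk_app_start Round $i"
      waitforlisten "$repeat_pid" /var/tmp/spdk-nbd.sock
      # ... bdev_malloc_create, nbd_rpc_data_verify, nbd_stop_disk (as traced above) ...
      /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock \
        spdk_kill_instance SIGTERM               # app restarts itself for the next round
      sleep 3
    done

    killprocess() {
      local pid=$1
      [ -n "$pid" ] || return 1                  # '[' -z ... ']' guard in the trace
      kill -0 "$pid" || return 0                 # pid already gone
      if [ "$(uname)" = Linux ]; then
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")   # e.g. reactor_0 in the trace
        [ "$process_name" != sudo ] || return 1  # refuse to SIGTERM a sudo wrapper
      fi
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid"
    }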
00:05:49.316 07:45:51 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:49.316 07:45:51 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:49.574 07:45:51 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:49.574 07:45:51 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:49.574 07:45:51 event.app_repeat -- event/event.sh@39 -- # killprocess 59753 00:05:49.574 07:45:51 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 59753 ']' 00:05:49.574 07:45:51 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 59753 00:05:49.574 07:45:51 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:05:49.574 07:45:51 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:49.574 07:45:51 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59753 00:05:49.574 07:45:51 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:49.574 07:45:51 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:49.574 killing process with pid 59753 00:05:49.574 07:45:51 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59753' 00:05:49.574 07:45:51 event.app_repeat -- common/autotest_common.sh@969 -- # kill 59753 00:05:49.574 07:45:51 event.app_repeat -- common/autotest_common.sh@974 -- # wait 59753 00:05:50.948 spdk_app_start is called in Round 0. 00:05:50.948 Shutdown signal received, stop current app iteration 00:05:50.948 Starting SPDK v25.01-pre git sha1 1c2942c86 / DPDK 24.03.0 reinitialization... 00:05:50.948 spdk_app_start is called in Round 1. 00:05:50.948 Shutdown signal received, stop current app iteration 00:05:50.948 Starting SPDK v25.01-pre git sha1 1c2942c86 / DPDK 24.03.0 reinitialization... 00:05:50.948 spdk_app_start is called in Round 2. 00:05:50.948 Shutdown signal received, stop current app iteration 00:05:50.948 Starting SPDK v25.01-pre git sha1 1c2942c86 / DPDK 24.03.0 reinitialization... 00:05:50.948 spdk_app_start is called in Round 3. 00:05:50.948 Shutdown signal received, stop current app iteration 00:05:50.948 ************************************ 00:05:50.948 END TEST app_repeat 00:05:50.948 ************************************ 00:05:50.948 07:45:52 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:50.948 07:45:52 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:50.948 00:05:50.948 real 0m22.290s 00:05:50.948 user 0m49.051s 00:05:50.949 sys 0m2.927s 00:05:50.949 07:45:52 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:50.949 07:45:52 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:50.949 07:45:52 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:50.949 07:45:52 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:50.949 07:45:52 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:50.949 07:45:52 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:50.949 07:45:52 event -- common/autotest_common.sh@10 -- # set +x 00:05:50.949 ************************************ 00:05:50.949 START TEST cpu_locks 00:05:50.949 ************************************ 00:05:50.949 07:45:52 event.cpu_locks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:50.949 * Looking for test storage... 
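
The killprocess 59753 sequence above (autotest_common.sh@950-@974) follows a fixed pattern: validate the pid, confirm the process exists, look up its name so a stray sudo is never killed, then kill and reap. A sketch of that flow; the early returns are assumptions, since the log only shows the happy path for reactor_0:

    killprocess() {
        local pid=$1 process_name=
        [ -z "$pid" ] && return 1              # @950: a pid is required
        kill -0 "$pid" || return 1             # @954: process must still exist
        if [ "$(uname)" = Linux ]; then        # @955
            process_name=$(ps --no-headers -o comm= "$pid")   # @956 -> reactor_0 here
        fi
        [ "$process_name" = sudo ] && return 1 # @960: refuse to kill sudo (assumed handling)
        echo "killing process with pid $pid"   # @968
        kill "$pid"                            # @969: default SIGTERM
        wait "$pid" || true                    # @974: reap the child, tolerate nonzero exit
    }
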
00:05:50.949 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:50.949 07:45:52 event.cpu_locks -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:50.949 07:45:52 event.cpu_locks -- common/autotest_common.sh@1681 -- # lcov --version 00:05:50.949 07:45:52 event.cpu_locks -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:50.949 07:45:52 event.cpu_locks -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:50.949 07:45:52 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:50.949 07:45:52 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:50.949 07:45:52 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:50.949 07:45:52 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:05:50.949 07:45:52 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:05:50.949 07:45:52 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:05:50.949 07:45:52 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:05:50.949 07:45:52 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:05:50.949 07:45:52 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:05:50.949 07:45:52 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:05:50.949 07:45:52 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:50.949 07:45:52 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:05:50.949 07:45:52 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:05:50.949 07:45:52 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:50.949 07:45:52 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:50.949 07:45:52 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:05:50.949 07:45:52 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:05:50.949 07:45:52 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:50.949 07:45:52 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:05:50.949 07:45:52 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:05:50.949 07:45:52 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:05:50.949 07:45:52 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:05:50.949 07:45:52 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:50.949 07:45:52 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:05:50.949 07:45:52 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:05:50.949 07:45:52 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:50.949 07:45:52 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:50.949 07:45:52 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:05:50.949 07:45:52 event.cpu_locks -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:50.949 07:45:52 event.cpu_locks -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:50.949 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.949 --rc genhtml_branch_coverage=1 00:05:50.949 --rc genhtml_function_coverage=1 00:05:50.949 --rc genhtml_legend=1 00:05:50.949 --rc geninfo_all_blocks=1 00:05:50.949 --rc geninfo_unexecuted_blocks=1 00:05:50.949 00:05:50.949 ' 00:05:50.949 07:45:52 event.cpu_locks -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:50.949 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.949 --rc genhtml_branch_coverage=1 00:05:50.949 --rc genhtml_function_coverage=1 
00:05:50.949 --rc genhtml_legend=1 00:05:50.949 --rc geninfo_all_blocks=1 00:05:50.949 --rc geninfo_unexecuted_blocks=1 00:05:50.949 00:05:50.949 ' 00:05:50.949 07:45:52 event.cpu_locks -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:50.949 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.949 --rc genhtml_branch_coverage=1 00:05:50.949 --rc genhtml_function_coverage=1 00:05:50.949 --rc genhtml_legend=1 00:05:50.949 --rc geninfo_all_blocks=1 00:05:50.949 --rc geninfo_unexecuted_blocks=1 00:05:50.949 00:05:50.949 ' 00:05:50.949 07:45:52 event.cpu_locks -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:50.949 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.949 --rc genhtml_branch_coverage=1 00:05:50.949 --rc genhtml_function_coverage=1 00:05:50.949 --rc genhtml_legend=1 00:05:50.949 --rc geninfo_all_blocks=1 00:05:50.949 --rc geninfo_unexecuted_blocks=1 00:05:50.949 00:05:50.949 ' 00:05:50.949 07:45:52 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:50.949 07:45:52 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:50.949 07:45:52 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:50.949 07:45:52 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:50.949 07:45:52 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:50.949 07:45:52 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:50.949 07:45:52 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:50.949 ************************************ 00:05:50.949 START TEST default_locks 00:05:50.949 ************************************ 00:05:50.949 07:45:52 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:05:50.949 07:45:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=60235 00:05:50.949 07:45:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 60235 00:05:50.949 07:45:52 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 60235 ']' 00:05:50.949 07:45:52 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:50.949 07:45:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:50.949 07:45:52 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:50.949 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:50.949 07:45:52 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:50.949 07:45:52 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:50.949 07:45:52 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:50.949 [2024-10-09 07:45:52.928387] Starting SPDK v25.01-pre git sha1 1c2942c86 / DPDK 24.03.0 initialization... 
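
The cpu_locks preamble above probes lcov --version and runs lt 1.15 2, i.e. scripts/common.sh's field-by-field version comparison (@333-@368): both strings are split on . - : into arrays, each field is normalized through decimal, and the first unequal pair decides the relation. A behavioral sketch; the non-numeric default and the equality fall-through are assumptions, since the trace only exercises the 1.15 < 2 path:

    decimal() {
        local d=$1
        [[ $d =~ ^[0-9]+$ ]] || d=0          # non-numeric or missing fields compare as 0 (assumed)
        echo "$d"
    }
    cmp_versions() {
        local IFS=.-: op=$2 v a b
        local -a ver1 ver2
        read -ra ver1 <<< "$1"               # "1.15" -> (1 15), so ver1_l=2
        read -ra ver2 <<< "$3"               # "2"    -> (2),    so ver2_l=1
        for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
            a=$(decimal "${ver1[v]}") b=$(decimal "${ver2[v]}")
            ((10#$a > 10#$b)) && { [[ $op == '>' ]]; return; }
            ((10#$a < 10#$b)) && { [[ $op == '<' ]]; return; }
        done
        [[ $op == *'='* ]]                   # all fields equal: only >=, <= style ops succeed (assumed)
    }
    lt() { cmp_versions "$1" '<' "$2"; }     # "lt 1.15 2" is true: 1 < 2 in the first field
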
00:05:50.949 [2024-10-09 07:45:52.929073] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60235 ] 00:05:51.207 [2024-10-09 07:45:53.103155] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.464 [2024-10-09 07:45:53.340422] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.401 07:45:54 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:52.401 07:45:54 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:05:52.401 07:45:54 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 60235 00:05:52.401 07:45:54 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 60235 00:05:52.401 07:45:54 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:52.659 07:45:54 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 60235 00:05:52.659 07:45:54 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 60235 ']' 00:05:52.659 07:45:54 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 60235 00:05:52.659 07:45:54 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:05:52.659 07:45:54 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:52.659 07:45:54 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60235 00:05:52.917 07:45:54 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:52.917 killing process with pid 60235 00:05:52.917 07:45:54 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:52.917 07:45:54 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60235' 00:05:52.917 07:45:54 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 60235 00:05:52.917 07:45:54 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 60235 00:05:55.485 07:45:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 60235 00:05:55.485 07:45:56 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:05:55.485 07:45:56 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 60235 00:05:55.485 07:45:56 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:55.485 07:45:56 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:55.485 07:45:56 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:55.485 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
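
locks_exist above (cpu_locks.sh@22) is the assertion at the heart of this suite: a target "holds its core" when lslocks reports a lock held by that pid on a file whose path contains spdk_cpu_lock. The helper is short enough to restate almost verbatim from the trace:

    locks_exist() {
        lslocks -p "$1" | grep -q spdk_cpu_lock   # matches e.g. /var/tmp/spdk_cpu_lock_000
    }

    locks_exist 60235 && echo "pid 60235 holds a per-core lock file"
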
00:05:55.485 07:45:56 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:55.485 07:45:56 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 60235 00:05:55.485 07:45:56 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 60235 ']' 00:05:55.485 07:45:56 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:55.485 07:45:56 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:55.485 07:45:56 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:55.485 07:45:56 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:55.485 07:45:56 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:55.485 ERROR: process (pid: 60235) is no longer running 00:05:55.485 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (60235) - No such process 00:05:55.485 ************************************ 00:05:55.485 END TEST default_locks 00:05:55.485 ************************************ 00:05:55.485 07:45:56 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:55.485 07:45:56 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:05:55.485 07:45:56 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:05:55.485 07:45:56 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:55.485 07:45:56 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:55.485 07:45:56 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:55.485 07:45:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:55.485 07:45:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:55.485 07:45:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:55.485 07:45:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:55.485 00:05:55.485 real 0m4.162s 00:05:55.485 user 0m4.281s 00:05:55.485 sys 0m0.692s 00:05:55.485 07:45:56 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:55.485 07:45:56 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:55.485 07:45:57 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:55.485 07:45:57 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:55.485 07:45:57 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:55.485 07:45:57 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:55.485 ************************************ 00:05:55.485 START TEST default_locks_via_rpc 00:05:55.485 ************************************ 00:05:55.485 07:45:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:05:55.485 07:45:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=60310 00:05:55.485 07:45:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 60310 00:05:55.485 07:45:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 60310 ']' 
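
NOT waitforlisten 60235 above inverts an expected failure: the target was just killed, so waitforlisten must fail, and only then does the test pass (autotest_common.sh@650-@677, ending in es=1 and (( !es == 0 ))). A sketch of that wrapper; the valid_exec_arg type check and the signal handling hinted at by (( es > 128 )) are collapsed into comments:

    NOT() {
        local es=0
        # @638-@642: valid_exec_arg first confirms $1 is a runnable function or binary
        "$@" || es=$?        # run the command; capture its exit status
        # @661: es > 128 would mean death by signal and is handled separately in the real helper
        ((es != 0))          # succeed only if the wrapped command failed
    }

    NOT waitforlisten 60235   # passes here: pid 60235 is already gone
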
00:05:55.485 07:45:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:55.485 07:45:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:55.485 07:45:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:55.485 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:55.485 07:45:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:55.485 07:45:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:55.485 07:45:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:55.485 [2024-10-09 07:45:57.147946] Starting SPDK v25.01-pre git sha1 1c2942c86 / DPDK 24.03.0 initialization... 00:05:55.485 [2024-10-09 07:45:57.148148] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60310 ] 00:05:55.485 [2024-10-09 07:45:57.326489] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.743 [2024-10-09 07:45:57.569892] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.678 07:45:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:56.678 07:45:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:56.678 07:45:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:56.678 07:45:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:56.678 07:45:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:56.678 07:45:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:56.678 07:45:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:56.678 07:45:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:56.678 07:45:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:56.678 07:45:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:56.678 07:45:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:56.678 07:45:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:56.678 07:45:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:56.678 07:45:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:56.678 07:45:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 60310 00:05:56.678 07:45:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:56.678 07:45:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 60310 00:05:56.936 07:45:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 60310 00:05:56.936 07:45:58 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 60310 ']' 00:05:56.936 07:45:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 60310 00:05:56.936 07:45:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname 00:05:56.936 07:45:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:56.936 07:45:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60310 00:05:57.195 killing process with pid 60310 00:05:57.195 07:45:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:57.195 07:45:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:57.195 07:45:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60310' 00:05:57.195 07:45:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 60310 00:05:57.195 07:45:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 60310 00:05:59.726 ************************************ 00:05:59.726 END TEST default_locks_via_rpc 00:05:59.726 ************************************ 00:05:59.726 00:05:59.726 real 0m4.339s 00:05:59.726 user 0m4.535s 00:05:59.726 sys 0m0.674s 00:05:59.726 07:46:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:59.726 07:46:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:59.726 07:46:01 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:59.726 07:46:01 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:59.726 07:46:01 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:59.726 07:46:01 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:59.726 ************************************ 00:05:59.726 START TEST non_locking_app_on_locked_coremask 00:05:59.726 ************************************ 00:05:59.726 07:46:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask 00:05:59.726 07:46:01 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=60386 00:05:59.726 07:46:01 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:59.726 07:46:01 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 60386 /var/tmp/spdk.sock 00:05:59.726 07:46:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 60386 ']' 00:05:59.726 07:46:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:59.726 07:46:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:59.726 07:46:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:59.726 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
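
Every test in this file blocks in waitforlisten until the freshly launched target answers on its RPC socket. Only the bookkeeping is visible in the trace (@831 pid check, @835 rpc_addr, @836 max_retries=100; the probes run under xtrace_disable), so in the sketch below the liveness check, the rpc.py rpc_get_methods probe, and the sleep are all assumptions about what happens between @840 and the (( i == 0 )) check at @860:

    waitforlisten() {
        [ -z "$1" ] && return 1                          # @831: a pid is required
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}   # @835
        local max_retries=100 i                          # @836
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for ((i = max_retries; i > 0; i--)); do
            kill -0 "$pid" || return 1                   # assumed: give up if the target died
            /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_addr" \
                rpc_get_methods &> /dev/null && break    # assumed probe; any RPC answer will do
            sleep 0.5                                    # assumed pause between probes
        done
        ((i == 0)) && return 1                           # @860: retries exhausted
        return 0                                         # @864
    }
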
00:05:59.726 07:46:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:59.726 07:46:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:59.726 [2024-10-09 07:46:01.540691] Starting SPDK v25.01-pre git sha1 1c2942c86 / DPDK 24.03.0 initialization... 00:05:59.726 [2024-10-09 07:46:01.541098] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60386 ] 00:05:59.726 [2024-10-09 07:46:01.716999] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:59.985 [2024-10-09 07:46:01.946899] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.920 07:46:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:00.920 07:46:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:00.920 07:46:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=60408 00:06:00.920 07:46:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 60408 /var/tmp/spdk2.sock 00:06:00.920 07:46:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:00.920 07:46:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 60408 ']' 00:06:00.920 07:46:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:00.920 07:46:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:00.920 07:46:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:00.920 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:00.920 07:46:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:00.920 07:46:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:01.178 [2024-10-09 07:46:02.944768] Starting SPDK v25.01-pre git sha1 1c2942c86 / DPDK 24.03.0 initialization... 00:06:01.178 [2024-10-09 07:46:02.945169] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60408 ] 00:06:01.178 [2024-10-09 07:46:03.137464] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:01.178 [2024-10-09 07:46:03.137539] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.746 [2024-10-09 07:46:03.543343] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.278 07:46:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:04.278 07:46:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:04.278 07:46:05 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 60386 00:06:04.278 07:46:05 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:04.278 07:46:05 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60386 00:06:04.843 07:46:06 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 60386 00:06:04.843 07:46:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 60386 ']' 00:06:04.843 07:46:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 60386 00:06:04.843 07:46:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:04.843 07:46:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:04.843 07:46:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60386 00:06:04.843 killing process with pid 60386 00:06:04.843 07:46:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:04.843 07:46:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:04.843 07:46:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60386' 00:06:04.843 07:46:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 60386 00:06:04.843 07:46:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 60386 00:06:10.111 07:46:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 60408 00:06:10.111 07:46:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 60408 ']' 00:06:10.111 07:46:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 60408 00:06:10.111 07:46:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:10.111 07:46:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:10.111 07:46:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60408 00:06:10.111 killing process with pid 60408 00:06:10.111 07:46:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:10.111 07:46:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:10.111 07:46:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60408' 00:06:10.111 07:46:11 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 60408 00:06:10.111 07:46:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 60408 00:06:12.013 00:06:12.014 real 0m12.311s 00:06:12.014 user 0m13.149s 00:06:12.014 sys 0m1.387s 00:06:12.014 07:46:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:12.014 07:46:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:12.014 ************************************ 00:06:12.014 END TEST non_locking_app_on_locked_coremask 00:06:12.014 ************************************ 00:06:12.014 07:46:13 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:12.014 07:46:13 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:12.014 07:46:13 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:12.014 07:46:13 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:12.014 ************************************ 00:06:12.014 START TEST locking_app_on_unlocked_coremask 00:06:12.014 ************************************ 00:06:12.014 07:46:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask 00:06:12.014 07:46:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=60562 00:06:12.014 07:46:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 60562 /var/tmp/spdk.sock 00:06:12.014 07:46:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 60562 ']' 00:06:12.014 07:46:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:12.014 07:46:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:12.014 07:46:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:12.014 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:12.014 07:46:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:12.014 07:46:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:12.014 07:46:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:12.014 [2024-10-09 07:46:13.909031] Starting SPDK v25.01-pre git sha1 1c2942c86 / DPDK 24.03.0 initialization... 00:06:12.014 [2024-10-09 07:46:13.909224] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60562 ] 00:06:12.273 [2024-10-09 07:46:14.087832] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:12.273 [2024-10-09 07:46:14.087926] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.531 [2024-10-09 07:46:14.346897] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.472 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:13.472 07:46:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:13.472 07:46:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:13.472 07:46:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=60584 00:06:13.472 07:46:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 60584 /var/tmp/spdk2.sock 00:06:13.472 07:46:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:13.472 07:46:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 60584 ']' 00:06:13.472 07:46:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:13.472 07:46:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:13.472 07:46:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:13.472 07:46:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:13.472 07:46:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:13.472 [2024-10-09 07:46:15.311012] Starting SPDK v25.01-pre git sha1 1c2942c86 / DPDK 24.03.0 initialization... 
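
The locking_app_on_unlocked_coremask run above reduces to one property: when the first target opts out of core locking, nothing stops a second target from claiming the same core. A condensed replay of the two launches shown in the trace (cpu_locks.sh@97 and @101); the second instance only needs its own RPC socket via -r, and backgrounding with & stands in for the script's pid bookkeeping:

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks &
    # first target logs "CPU core locks deactivated." and runs a reactor on core 0
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &
    # second target takes the core 0 lock for itself, so both come up

The default_locks_via_rpc test earlier toggled the same behavior at runtime instead, through scripts/rpc.py framework_disable_cpumask_locks and framework_enable_cpumask_locks.
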
00:06:13.472 [2024-10-09 07:46:15.311812] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60584 ] 00:06:13.731 [2024-10-09 07:46:15.501204] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.990 [2024-10-09 07:46:15.894192] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.521 07:46:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:16.521 07:46:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:16.521 07:46:18 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 60584 00:06:16.521 07:46:18 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60584 00:06:16.521 07:46:18 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:17.455 07:46:19 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 60562 00:06:17.455 07:46:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 60562 ']' 00:06:17.455 07:46:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 60562 00:06:17.455 07:46:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:17.455 07:46:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:17.455 07:46:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60562 00:06:17.455 killing process with pid 60562 00:06:17.455 07:46:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:17.455 07:46:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:17.455 07:46:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60562' 00:06:17.455 07:46:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 60562 00:06:17.455 07:46:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 60562 00:06:22.807 07:46:23 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 60584 00:06:22.807 07:46:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 60584 ']' 00:06:22.807 07:46:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 60584 00:06:22.807 07:46:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:22.807 07:46:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:22.807 07:46:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60584 00:06:22.807 07:46:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:22.807 killing process with pid 60584 00:06:22.807 07:46:23 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:22.807 07:46:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60584' 00:06:22.807 07:46:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 60584 00:06:22.807 07:46:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 60584 00:06:24.181 00:06:24.181 real 0m12.323s 00:06:24.181 user 0m13.238s 00:06:24.181 sys 0m1.436s 00:06:24.181 07:46:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:24.181 ************************************ 00:06:24.181 END TEST locking_app_on_unlocked_coremask 00:06:24.181 ************************************ 00:06:24.181 07:46:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:24.181 07:46:26 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:24.181 07:46:26 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:24.181 07:46:26 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:24.181 07:46:26 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:24.181 ************************************ 00:06:24.181 START TEST locking_app_on_locked_coremask 00:06:24.181 ************************************ 00:06:24.181 07:46:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask 00:06:24.181 07:46:26 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=60743 00:06:24.181 07:46:26 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 60743 /var/tmp/spdk.sock 00:06:24.181 07:46:26 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:24.181 07:46:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 60743 ']' 00:06:24.181 07:46:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:24.181 07:46:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:24.181 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:24.181 07:46:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:24.181 07:46:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:24.181 07:46:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:24.439 [2024-10-09 07:46:26.269702] Starting SPDK v25.01-pre git sha1 1c2942c86 / DPDK 24.03.0 initialization... 
00:06:24.439 [2024-10-09 07:46:26.270107] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60743 ] 00:06:24.439 [2024-10-09 07:46:26.446527] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.698 [2024-10-09 07:46:26.691900] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.632 07:46:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:25.632 07:46:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:25.632 07:46:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=60759 00:06:25.632 07:46:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 60759 /var/tmp/spdk2.sock 00:06:25.632 07:46:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:25.632 07:46:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:06:25.632 07:46:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 60759 /var/tmp/spdk2.sock 00:06:25.632 07:46:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:25.632 07:46:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:25.632 07:46:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:25.632 07:46:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:25.632 07:46:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 60759 /var/tmp/spdk2.sock 00:06:25.632 07:46:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 60759 ']' 00:06:25.632 07:46:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:25.632 07:46:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:25.632 07:46:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:25.632 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:25.632 07:46:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:25.632 07:46:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:25.891 [2024-10-09 07:46:27.654575] Starting SPDK v25.01-pre git sha1 1c2942c86 / DPDK 24.03.0 initialization... 
00:06:25.891 [2024-10-09 07:46:27.654982] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60759 ] 00:06:25.891 [2024-10-09 07:46:27.840524] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 60743 has claimed it. 00:06:25.891 [2024-10-09 07:46:27.840621] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:26.460 ERROR: process (pid: 60759) is no longer running 00:06:26.460 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (60759) - No such process 00:06:26.460 07:46:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:26.460 07:46:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1 00:06:26.460 07:46:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:06:26.460 07:46:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:26.460 07:46:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:26.460 07:46:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:26.460 07:46:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 60743 00:06:26.460 07:46:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60743 00:06:26.460 07:46:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:27.025 07:46:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 60743 00:06:27.025 07:46:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 60743 ']' 00:06:27.025 07:46:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 60743 00:06:27.025 07:46:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:27.025 07:46:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:27.025 07:46:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60743 00:06:27.025 killing process with pid 60743 00:06:27.025 07:46:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:27.025 07:46:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:27.025 07:46:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60743' 00:06:27.025 07:46:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 60743 00:06:27.025 07:46:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 60743 00:06:29.555 00:06:29.555 real 0m4.954s 00:06:29.555 user 0m5.546s 00:06:29.555 sys 0m0.812s 00:06:29.555 07:46:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:29.555 07:46:31 event.cpu_locks.locking_app_on_locked_coremask 
-- common/autotest_common.sh@10 -- # set +x 00:06:29.555 ************************************ 00:06:29.555 END TEST locking_app_on_locked_coremask 00:06:29.555 ************************************ 00:06:29.555 07:46:31 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:29.555 07:46:31 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:29.555 07:46:31 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:29.555 07:46:31 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:29.555 ************************************ 00:06:29.555 START TEST locking_overlapped_coremask 00:06:29.555 ************************************ 00:06:29.555 07:46:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask 00:06:29.555 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:29.555 07:46:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=60823 00:06:29.555 07:46:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 60823 /var/tmp/spdk.sock 00:06:29.555 07:46:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:06:29.555 07:46:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 60823 ']' 00:06:29.555 07:46:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:29.555 07:46:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:29.555 07:46:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:29.555 07:46:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:29.555 07:46:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:29.555 [2024-10-09 07:46:31.285724] Starting SPDK v25.01-pre git sha1 1c2942c86 / DPDK 24.03.0 initialization... 
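
The locking_app_on_locked_coremask failure above is the enforcement side of the same mechanism: pid 60743 holds the core 0 lock file, so the second target logs "Cannot create lock on core 0, probably process 60743 has claimed it" from app.c:claim_cpu_cores and exits, which NOT waitforlisten converts into a pass. The exclusion can be mimicked with flock; this is an analogy only, not SPDK's implementation, which takes the lock inside app.c:

    exec 9> /var/tmp/spdk_cpu_lock_000        # the per-core lock file for core 0
    if ! flock -n 9; then                     # non-blocking attempt, like a second spdk_tgt would make
        echo "core 0 is already claimed by another process"
        exit 1
    fi
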
00:06:29.555 [2024-10-09 07:46:31.286105] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60823 ] 00:06:29.555 [2024-10-09 07:46:31.460658] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:29.812 [2024-10-09 07:46:31.692808] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:06:29.812 [2024-10-09 07:46:31.692960] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.812 [2024-10-09 07:46:31.692986] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:06:30.751 07:46:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:30.751 07:46:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:30.751 07:46:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=60852 00:06:30.751 07:46:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 60852 /var/tmp/spdk2.sock 00:06:30.751 07:46:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:06:30.751 07:46:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:30.751 07:46:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 60852 /var/tmp/spdk2.sock 00:06:30.751 07:46:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:30.751 07:46:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:30.751 07:46:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:30.751 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:30.751 07:46:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:30.751 07:46:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 60852 /var/tmp/spdk2.sock 00:06:30.751 07:46:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 60852 ']' 00:06:30.751 07:46:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:30.751 07:46:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:30.751 07:46:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:30.751 07:46:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:30.751 07:46:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:30.751 [2024-10-09 07:46:32.603076] Starting SPDK v25.01-pre git sha1 1c2942c86 / DPDK 24.03.0 initialization... 
00:06:30.751 [2024-10-09 07:46:32.603256] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60852 ] 00:06:31.043 [2024-10-09 07:46:32.788238] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 60823 has claimed it. 00:06:31.043 [2024-10-09 07:46:32.788361] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:31.300 ERROR: process (pid: 60852) is no longer running 00:06:31.300 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (60852) - No such process 00:06:31.300 07:46:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:31.300 07:46:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1 00:06:31.300 07:46:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:06:31.300 07:46:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:31.300 07:46:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:31.300 07:46:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:31.300 07:46:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:31.300 07:46:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:31.300 07:46:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:31.300 07:46:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:31.300 07:46:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 60823 00:06:31.300 07:46:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 60823 ']' 00:06:31.300 07:46:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 60823 00:06:31.300 07:46:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:06:31.558 07:46:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:31.558 07:46:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60823 00:06:31.558 killing process with pid 60823 00:06:31.558 07:46:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:31.558 07:46:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:31.558 07:46:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60823' 00:06:31.558 07:46:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 -- # kill 60823 00:06:31.558 07:46:33 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 60823 00:06:34.088 00:06:34.088 real 0m4.498s 00:06:34.088 user 0m11.909s 00:06:34.088 sys 0m0.594s 00:06:34.088 07:46:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:34.088 ************************************ 00:06:34.088 END TEST locking_overlapped_coremask 00:06:34.088 ************************************ 00:06:34.088 07:46:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:34.088 07:46:35 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:34.088 07:46:35 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:34.088 07:46:35 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:34.088 07:46:35 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:34.088 ************************************ 00:06:34.088 START TEST locking_overlapped_coremask_via_rpc 00:06:34.088 ************************************ 00:06:34.088 07:46:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:06:34.088 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:34.088 07:46:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=60918 00:06:34.088 07:46:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 60918 /var/tmp/spdk.sock 00:06:34.088 07:46:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 60918 ']' 00:06:34.088 07:46:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:34.089 07:46:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:34.089 07:46:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:34.089 07:46:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:34.089 07:46:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:34.089 07:46:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:34.089 [2024-10-09 07:46:35.811432] Starting SPDK v25.01-pre git sha1 1c2942c86 / DPDK 24.03.0 initialization... 00:06:34.089 [2024-10-09 07:46:35.811804] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60918 ] 00:06:34.089 [2024-10-09 07:46:35.979705] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:34.089 [2024-10-09 07:46:35.979768] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:34.346 [2024-10-09 07:46:36.199504] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:06:34.346 [2024-10-09 07:46:36.199701] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:06:34.346 [2024-10-09 07:46:36.199705] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.280 07:46:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:35.280 07:46:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:35.280 07:46:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=60936 00:06:35.280 07:46:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:35.280 07:46:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 60936 /var/tmp/spdk2.sock 00:06:35.280 07:46:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 60936 ']' 00:06:35.280 07:46:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:35.280 07:46:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:35.280 07:46:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:35.280 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:35.280 07:46:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:35.280 07:46:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:35.280 [2024-10-09 07:46:37.129692] Starting SPDK v25.01-pre git sha1 1c2942c86 / DPDK 24.03.0 initialization... 00:06:35.280 [2024-10-09 07:46:37.130358] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60936 ] 00:06:35.539 [2024-10-09 07:46:37.310968] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:35.539 [2024-10-09 07:46:37.311048] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:35.798 [2024-10-09 07:46:37.777525] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:06:35.798 [2024-10-09 07:46:37.777615] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:06:35.798 [2024-10-09 07:46:37.777617] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4 00:06:38.347 07:46:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:38.347 07:46:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:38.347 07:46:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:38.347 07:46:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:38.347 07:46:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:38.347 07:46:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:38.347 07:46:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:38.347 07:46:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:06:38.347 07:46:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:38.347 07:46:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:06:38.347 07:46:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:38.347 07:46:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:06:38.347 07:46:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:38.347 07:46:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:38.347 07:46:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:38.347 07:46:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:38.347 [2024-10-09 07:46:40.100575] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 60918 has claimed it. 
00:06:38.347 request: 00:06:38.347 { 00:06:38.347 "method": "framework_enable_cpumask_locks", 00:06:38.347 "req_id": 1 00:06:38.347 } 00:06:38.347 Got JSON-RPC error response 00:06:38.347 response: 00:06:38.347 { 00:06:38.347 "code": -32603, 00:06:38.347 "message": "Failed to claim CPU core: 2" 00:06:38.347 } 00:06:38.347 07:46:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:38.347 07:46:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:06:38.348 07:46:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:38.348 07:46:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:38.348 07:46:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:38.348 07:46:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 60918 /var/tmp/spdk.sock 00:06:38.348 07:46:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 60918 ']' 00:06:38.348 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:38.348 07:46:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:38.348 07:46:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:38.348 07:46:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:38.348 07:46:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:38.348 07:46:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:38.604 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:38.605 07:46:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:38.605 07:46:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:38.605 07:46:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 60936 /var/tmp/spdk2.sock 00:06:38.605 07:46:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 60936 ']' 00:06:38.605 07:46:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:38.605 07:46:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:38.605 07:46:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:06:38.605 07:46:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:38.605 07:46:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:38.862 ************************************ 00:06:38.862 END TEST locking_overlapped_coremask_via_rpc 00:06:38.862 ************************************ 00:06:38.862 07:46:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:38.863 07:46:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:38.863 07:46:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:38.863 07:46:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:38.863 07:46:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:38.863 07:46:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:38.863 00:06:38.863 real 0m5.052s 00:06:38.863 user 0m2.052s 00:06:38.863 sys 0m0.256s 00:06:38.863 07:46:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:38.863 07:46:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:38.863 07:46:40 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:38.863 07:46:40 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 60918 ]] 00:06:38.863 07:46:40 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 60918 00:06:38.863 07:46:40 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 60918 ']' 00:06:38.863 07:46:40 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 60918 00:06:38.863 07:46:40 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:06:38.863 07:46:40 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:38.863 07:46:40 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60918 00:06:38.863 killing process with pid 60918 00:06:38.863 07:46:40 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:38.863 07:46:40 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:38.863 07:46:40 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60918' 00:06:38.863 07:46:40 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 60918 00:06:38.863 07:46:40 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 60918 00:06:41.397 07:46:43 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 60936 ]] 00:06:41.397 07:46:43 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 60936 00:06:41.397 07:46:43 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 60936 ']' 00:06:41.397 07:46:43 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 60936 00:06:41.397 07:46:43 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:06:41.397 07:46:43 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:41.397 
07:46:43 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60936 00:06:41.397 killing process with pid 60936 00:06:41.397 07:46:43 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:06:41.397 07:46:43 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:06:41.397 07:46:43 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60936' 00:06:41.397 07:46:43 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 60936 00:06:41.397 07:46:43 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 60936 00:06:43.607 07:46:45 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:43.607 Process with pid 60918 is not found 00:06:43.607 Process with pid 60936 is not found 00:06:43.607 07:46:45 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:43.607 07:46:45 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 60918 ]] 00:06:43.607 07:46:45 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 60918 00:06:43.607 07:46:45 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 60918 ']' 00:06:43.607 07:46:45 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 60918 00:06:43.607 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (60918) - No such process 00:06:43.607 07:46:45 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 60918 is not found' 00:06:43.607 07:46:45 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 60936 ]] 00:06:43.608 07:46:45 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 60936 00:06:43.608 07:46:45 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 60936 ']' 00:06:43.608 07:46:45 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 60936 00:06:43.608 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (60936) - No such process 00:06:43.608 07:46:45 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 60936 is not found' 00:06:43.608 07:46:45 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:43.608 ************************************ 00:06:43.608 END TEST cpu_locks 00:06:43.608 ************************************ 00:06:43.608 00:06:43.608 real 0m52.760s 00:06:43.608 user 1m31.325s 00:06:43.608 sys 0m6.813s 00:06:43.608 07:46:45 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:43.608 07:46:45 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:43.608 ************************************ 00:06:43.608 END TEST event 00:06:43.608 ************************************ 00:06:43.608 00:06:43.608 real 1m27.440s 00:06:43.608 user 2m41.070s 00:06:43.608 sys 0m10.791s 00:06:43.608 07:46:45 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:43.608 07:46:45 event -- common/autotest_common.sh@10 -- # set +x 00:06:43.608 07:46:45 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:43.608 07:46:45 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:43.608 07:46:45 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:43.608 07:46:45 -- common/autotest_common.sh@10 -- # set +x 00:06:43.608 ************************************ 00:06:43.608 START TEST thread 00:06:43.608 ************************************ 00:06:43.608 07:46:45 thread -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:43.608 * Looking for test storage... 
00:06:43.608 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:06:43.608 07:46:45 thread -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:43.608 07:46:45 thread -- common/autotest_common.sh@1681 -- # lcov --version 00:06:43.608 07:46:45 thread -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:43.867 07:46:45 thread -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:43.867 07:46:45 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:43.867 07:46:45 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:43.867 07:46:45 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:43.867 07:46:45 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:43.867 07:46:45 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:43.867 07:46:45 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:43.867 07:46:45 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:43.867 07:46:45 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:43.867 07:46:45 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:43.867 07:46:45 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:43.867 07:46:45 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:43.867 07:46:45 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:43.867 07:46:45 thread -- scripts/common.sh@345 -- # : 1 00:06:43.867 07:46:45 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:43.867 07:46:45 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:43.867 07:46:45 thread -- scripts/common.sh@365 -- # decimal 1 00:06:43.867 07:46:45 thread -- scripts/common.sh@353 -- # local d=1 00:06:43.867 07:46:45 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:43.867 07:46:45 thread -- scripts/common.sh@355 -- # echo 1 00:06:43.867 07:46:45 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:43.867 07:46:45 thread -- scripts/common.sh@366 -- # decimal 2 00:06:43.867 07:46:45 thread -- scripts/common.sh@353 -- # local d=2 00:06:43.867 07:46:45 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:43.867 07:46:45 thread -- scripts/common.sh@355 -- # echo 2 00:06:43.867 07:46:45 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:43.867 07:46:45 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:43.867 07:46:45 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:43.867 07:46:45 thread -- scripts/common.sh@368 -- # return 0 00:06:43.867 07:46:45 thread -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:43.867 07:46:45 thread -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:43.867 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:43.867 --rc genhtml_branch_coverage=1 00:06:43.867 --rc genhtml_function_coverage=1 00:06:43.867 --rc genhtml_legend=1 00:06:43.867 --rc geninfo_all_blocks=1 00:06:43.867 --rc geninfo_unexecuted_blocks=1 00:06:43.867 00:06:43.867 ' 00:06:43.867 07:46:45 thread -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:43.867 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:43.867 --rc genhtml_branch_coverage=1 00:06:43.867 --rc genhtml_function_coverage=1 00:06:43.867 --rc genhtml_legend=1 00:06:43.867 --rc geninfo_all_blocks=1 00:06:43.867 --rc geninfo_unexecuted_blocks=1 00:06:43.867 00:06:43.867 ' 00:06:43.867 07:46:45 thread -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:43.867 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:06:43.867 --rc genhtml_branch_coverage=1 00:06:43.867 --rc genhtml_function_coverage=1 00:06:43.867 --rc genhtml_legend=1 00:06:43.867 --rc geninfo_all_blocks=1 00:06:43.867 --rc geninfo_unexecuted_blocks=1 00:06:43.867 00:06:43.867 ' 00:06:43.867 07:46:45 thread -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:43.867 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:43.867 --rc genhtml_branch_coverage=1 00:06:43.867 --rc genhtml_function_coverage=1 00:06:43.867 --rc genhtml_legend=1 00:06:43.867 --rc geninfo_all_blocks=1 00:06:43.867 --rc geninfo_unexecuted_blocks=1 00:06:43.867 00:06:43.867 ' 00:06:43.867 07:46:45 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:43.867 07:46:45 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:06:43.867 07:46:45 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:43.867 07:46:45 thread -- common/autotest_common.sh@10 -- # set +x 00:06:43.867 ************************************ 00:06:43.867 START TEST thread_poller_perf 00:06:43.867 ************************************ 00:06:43.867 07:46:45 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:43.867 [2024-10-09 07:46:45.693862] Starting SPDK v25.01-pre git sha1 1c2942c86 / DPDK 24.03.0 initialization... 00:06:43.867 [2024-10-09 07:46:45.694243] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61137 ] 00:06:44.127 [2024-10-09 07:46:45.894129] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.127 [2024-10-09 07:46:46.092414] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.127 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:06:45.505 [2024-10-09T07:46:47.517Z] ====================================== 00:06:45.505 [2024-10-09T07:46:47.517Z] busy:2215172092 (cyc) 00:06:45.505 [2024-10-09T07:46:47.517Z] total_run_count: 293000 00:06:45.505 [2024-10-09T07:46:47.517Z] tsc_hz: 2200000000 (cyc) 00:06:45.505 [2024-10-09T07:46:47.517Z] ====================================== 00:06:45.505 [2024-10-09T07:46:47.517Z] poller_cost: 7560 (cyc), 3436 (nsec) 00:06:45.505 00:06:45.505 real 0m1.847s 00:06:45.505 user 0m1.628s 00:06:45.505 sys 0m0.107s 00:06:45.505 ************************************ 00:06:45.505 END TEST thread_poller_perf 00:06:45.505 ************************************ 00:06:45.505 07:46:47 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:45.505 07:46:47 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:45.763 07:46:47 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:45.763 07:46:47 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:06:45.764 07:46:47 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:45.764 07:46:47 thread -- common/autotest_common.sh@10 -- # set +x 00:06:45.764 ************************************ 00:06:45.764 START TEST thread_poller_perf 00:06:45.764 ************************************ 00:06:45.764 07:46:47 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:45.764 [2024-10-09 07:46:47.597684] Starting SPDK v25.01-pre git sha1 1c2942c86 / DPDK 24.03.0 initialization... 00:06:45.764 [2024-10-09 07:46:47.597883] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61173 ] 00:06:45.764 [2024-10-09 07:46:47.768052] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.022 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:06:46.022 [2024-10-09 07:46:48.000059] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.399 [2024-10-09T07:46:49.411Z] ====================================== 00:06:47.399 [2024-10-09T07:46:49.411Z] busy:2204838782 (cyc) 00:06:47.399 [2024-10-09T07:46:49.411Z] total_run_count: 3584000 00:06:47.399 [2024-10-09T07:46:49.411Z] tsc_hz: 2200000000 (cyc) 00:06:47.399 [2024-10-09T07:46:49.411Z] ====================================== 00:06:47.399 [2024-10-09T07:46:49.411Z] poller_cost: 615 (cyc), 279 (nsec) 00:06:47.399 ************************************ 00:06:47.399 END TEST thread_poller_perf 00:06:47.399 ************************************ 00:06:47.399 00:06:47.399 real 0m1.828s 00:06:47.399 user 0m1.609s 00:06:47.399 sys 0m0.106s 00:06:47.399 07:46:49 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:47.399 07:46:49 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:47.664 07:46:49 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:47.664 ************************************ 00:06:47.664 END TEST thread 00:06:47.664 ************************************ 00:06:47.664 00:06:47.664 real 0m3.961s 00:06:47.664 user 0m3.385s 00:06:47.664 sys 0m0.346s 00:06:47.664 07:46:49 thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:47.664 07:46:49 thread -- common/autotest_common.sh@10 -- # set +x 00:06:47.664 07:46:49 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:06:47.664 07:46:49 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:47.664 07:46:49 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:47.664 07:46:49 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:47.664 07:46:49 -- common/autotest_common.sh@10 -- # set +x 00:06:47.664 ************************************ 00:06:47.664 START TEST app_cmdline 00:06:47.664 ************************************ 00:06:47.664 07:46:49 app_cmdline -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:47.664 * Looking for test storage... 
00:06:47.664 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:47.664 07:46:49 app_cmdline -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:47.664 07:46:49 app_cmdline -- common/autotest_common.sh@1681 -- # lcov --version 00:06:47.664 07:46:49 app_cmdline -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:47.664 07:46:49 app_cmdline -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:47.664 07:46:49 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:47.664 07:46:49 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:47.664 07:46:49 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:47.664 07:46:49 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:06:47.664 07:46:49 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:06:47.664 07:46:49 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:06:47.664 07:46:49 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:06:47.664 07:46:49 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:06:47.664 07:46:49 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:06:47.664 07:46:49 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:06:47.664 07:46:49 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:47.664 07:46:49 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:06:47.664 07:46:49 app_cmdline -- scripts/common.sh@345 -- # : 1 00:06:47.664 07:46:49 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:47.664 07:46:49 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:47.664 07:46:49 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:06:47.664 07:46:49 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:06:47.664 07:46:49 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:47.664 07:46:49 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:06:47.664 07:46:49 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:06:47.664 07:46:49 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:06:47.664 07:46:49 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:06:47.664 07:46:49 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:47.664 07:46:49 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:06:47.664 07:46:49 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:06:47.664 07:46:49 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:47.664 07:46:49 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:47.664 07:46:49 app_cmdline -- scripts/common.sh@368 -- # return 0 00:06:47.664 07:46:49 app_cmdline -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:47.664 07:46:49 app_cmdline -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:47.664 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.664 --rc genhtml_branch_coverage=1 00:06:47.664 --rc genhtml_function_coverage=1 00:06:47.664 --rc genhtml_legend=1 00:06:47.664 --rc geninfo_all_blocks=1 00:06:47.664 --rc geninfo_unexecuted_blocks=1 00:06:47.664 00:06:47.664 ' 00:06:47.664 07:46:49 app_cmdline -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:47.664 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.664 --rc genhtml_branch_coverage=1 00:06:47.664 --rc genhtml_function_coverage=1 00:06:47.664 --rc genhtml_legend=1 00:06:47.664 --rc geninfo_all_blocks=1 00:06:47.664 --rc geninfo_unexecuted_blocks=1 00:06:47.664 
00:06:47.664 ' 00:06:47.664 07:46:49 app_cmdline -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:47.664 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.664 --rc genhtml_branch_coverage=1 00:06:47.664 --rc genhtml_function_coverage=1 00:06:47.664 --rc genhtml_legend=1 00:06:47.664 --rc geninfo_all_blocks=1 00:06:47.664 --rc geninfo_unexecuted_blocks=1 00:06:47.664 00:06:47.664 ' 00:06:47.664 07:46:49 app_cmdline -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:47.664 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.664 --rc genhtml_branch_coverage=1 00:06:47.664 --rc genhtml_function_coverage=1 00:06:47.664 --rc genhtml_legend=1 00:06:47.664 --rc geninfo_all_blocks=1 00:06:47.664 --rc geninfo_unexecuted_blocks=1 00:06:47.664 00:06:47.664 ' 00:06:47.664 07:46:49 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:47.664 07:46:49 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=61267 00:06:47.664 07:46:49 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:47.664 07:46:49 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 61267 00:06:47.664 07:46:49 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 61267 ']' 00:06:47.664 07:46:49 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:47.664 07:46:49 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:47.664 07:46:49 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:47.664 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:47.664 07:46:49 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:47.664 07:46:49 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:47.964 [2024-10-09 07:46:49.756945] Starting SPDK v25.01-pre git sha1 1c2942c86 / DPDK 24.03.0 initialization... 
00:06:47.964 [2024-10-09 07:46:49.757312] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61267 ] 00:06:47.964 [2024-10-09 07:46:49.920194] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.222 [2024-10-09 07:46:50.193178] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.156 07:46:50 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:49.156 07:46:50 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:06:49.156 07:46:50 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:06:49.413 { 00:06:49.413 "version": "SPDK v25.01-pre git sha1 1c2942c86", 00:06:49.413 "fields": { 00:06:49.413 "major": 25, 00:06:49.413 "minor": 1, 00:06:49.413 "patch": 0, 00:06:49.413 "suffix": "-pre", 00:06:49.413 "commit": "1c2942c86" 00:06:49.413 } 00:06:49.413 } 00:06:49.413 07:46:51 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:49.413 07:46:51 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:49.413 07:46:51 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:49.413 07:46:51 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:49.413 07:46:51 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:49.413 07:46:51 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:49.413 07:46:51 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:49.413 07:46:51 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:49.413 07:46:51 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:49.413 07:46:51 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:49.413 07:46:51 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:49.413 07:46:51 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:49.413 07:46:51 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:49.413 07:46:51 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:06:49.413 07:46:51 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:49.413 07:46:51 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:49.413 07:46:51 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:49.413 07:46:51 app_cmdline -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:49.413 07:46:51 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:49.413 07:46:51 app_cmdline -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:49.413 07:46:51 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:49.413 07:46:51 app_cmdline -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:49.413 07:46:51 app_cmdline -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:06:49.413 07:46:51 app_cmdline -- common/autotest_common.sh@653 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:49.671 request: 00:06:49.671 { 00:06:49.671 "method": "env_dpdk_get_mem_stats", 00:06:49.671 "req_id": 1 00:06:49.671 } 00:06:49.671 Got JSON-RPC error response 00:06:49.671 response: 00:06:49.671 { 00:06:49.671 "code": -32601, 00:06:49.671 "message": "Method not found" 00:06:49.671 } 00:06:49.671 07:46:51 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:06:49.671 07:46:51 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:49.671 07:46:51 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:49.671 07:46:51 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:49.671 07:46:51 app_cmdline -- app/cmdline.sh@1 -- # killprocess 61267 00:06:49.671 07:46:51 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 61267 ']' 00:06:49.671 07:46:51 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 61267 00:06:49.671 07:46:51 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:06:49.671 07:46:51 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:49.671 07:46:51 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61267 00:06:49.671 killing process with pid 61267 00:06:49.671 07:46:51 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:49.671 07:46:51 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:49.671 07:46:51 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61267' 00:06:49.671 07:46:51 app_cmdline -- common/autotest_common.sh@969 -- # kill 61267 00:06:49.671 07:46:51 app_cmdline -- common/autotest_common.sh@974 -- # wait 61267 00:06:52.203 00:06:52.203 real 0m4.467s 00:06:52.203 user 0m5.062s 00:06:52.203 sys 0m0.545s 00:06:52.203 07:46:53 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:52.203 ************************************ 00:06:52.203 END TEST app_cmdline 00:06:52.203 ************************************ 00:06:52.203 07:46:53 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:52.203 07:46:53 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:52.203 07:46:53 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:52.203 07:46:53 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:52.203 07:46:53 -- common/autotest_common.sh@10 -- # set +x 00:06:52.203 ************************************ 00:06:52.203 START TEST version 00:06:52.203 ************************************ 00:06:52.203 07:46:53 version -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:52.203 * Looking for test storage... 
00:06:52.203 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:52.203 07:46:54 version -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:52.203 07:46:54 version -- common/autotest_common.sh@1681 -- # lcov --version 00:06:52.203 07:46:54 version -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:52.203 07:46:54 version -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:52.203 07:46:54 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:52.203 07:46:54 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:52.203 07:46:54 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:52.203 07:46:54 version -- scripts/common.sh@336 -- # IFS=.-: 00:06:52.203 07:46:54 version -- scripts/common.sh@336 -- # read -ra ver1 00:06:52.203 07:46:54 version -- scripts/common.sh@337 -- # IFS=.-: 00:06:52.203 07:46:54 version -- scripts/common.sh@337 -- # read -ra ver2 00:06:52.203 07:46:54 version -- scripts/common.sh@338 -- # local 'op=<' 00:06:52.203 07:46:54 version -- scripts/common.sh@340 -- # ver1_l=2 00:06:52.203 07:46:54 version -- scripts/common.sh@341 -- # ver2_l=1 00:06:52.203 07:46:54 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:52.203 07:46:54 version -- scripts/common.sh@344 -- # case "$op" in 00:06:52.203 07:46:54 version -- scripts/common.sh@345 -- # : 1 00:06:52.203 07:46:54 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:52.203 07:46:54 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:52.203 07:46:54 version -- scripts/common.sh@365 -- # decimal 1 00:06:52.203 07:46:54 version -- scripts/common.sh@353 -- # local d=1 00:06:52.203 07:46:54 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:52.203 07:46:54 version -- scripts/common.sh@355 -- # echo 1 00:06:52.203 07:46:54 version -- scripts/common.sh@365 -- # ver1[v]=1 00:06:52.203 07:46:54 version -- scripts/common.sh@366 -- # decimal 2 00:06:52.203 07:46:54 version -- scripts/common.sh@353 -- # local d=2 00:06:52.203 07:46:54 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:52.203 07:46:54 version -- scripts/common.sh@355 -- # echo 2 00:06:52.203 07:46:54 version -- scripts/common.sh@366 -- # ver2[v]=2 00:06:52.203 07:46:54 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:52.203 07:46:54 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:52.203 07:46:54 version -- scripts/common.sh@368 -- # return 0 00:06:52.203 07:46:54 version -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:52.203 07:46:54 version -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:52.203 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:52.203 --rc genhtml_branch_coverage=1 00:06:52.203 --rc genhtml_function_coverage=1 00:06:52.203 --rc genhtml_legend=1 00:06:52.203 --rc geninfo_all_blocks=1 00:06:52.203 --rc geninfo_unexecuted_blocks=1 00:06:52.203 00:06:52.203 ' 00:06:52.203 07:46:54 version -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:52.203 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:52.203 --rc genhtml_branch_coverage=1 00:06:52.203 --rc genhtml_function_coverage=1 00:06:52.203 --rc genhtml_legend=1 00:06:52.203 --rc geninfo_all_blocks=1 00:06:52.203 --rc geninfo_unexecuted_blocks=1 00:06:52.203 00:06:52.203 ' 00:06:52.203 07:46:54 version -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:52.203 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:06:52.203 --rc genhtml_branch_coverage=1 00:06:52.203 --rc genhtml_function_coverage=1 00:06:52.203 --rc genhtml_legend=1 00:06:52.203 --rc geninfo_all_blocks=1 00:06:52.204 --rc geninfo_unexecuted_blocks=1 00:06:52.204 00:06:52.204 ' 00:06:52.204 07:46:54 version -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:52.204 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:52.204 --rc genhtml_branch_coverage=1 00:06:52.204 --rc genhtml_function_coverage=1 00:06:52.204 --rc genhtml_legend=1 00:06:52.204 --rc geninfo_all_blocks=1 00:06:52.204 --rc geninfo_unexecuted_blocks=1 00:06:52.204 00:06:52.204 ' 00:06:52.204 07:46:54 version -- app/version.sh@17 -- # get_header_version major 00:06:52.204 07:46:54 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:52.204 07:46:54 version -- app/version.sh@14 -- # cut -f2 00:06:52.204 07:46:54 version -- app/version.sh@14 -- # tr -d '"' 00:06:52.204 07:46:54 version -- app/version.sh@17 -- # major=25 00:06:52.204 07:46:54 version -- app/version.sh@18 -- # get_header_version minor 00:06:52.204 07:46:54 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:52.204 07:46:54 version -- app/version.sh@14 -- # cut -f2 00:06:52.204 07:46:54 version -- app/version.sh@14 -- # tr -d '"' 00:06:52.204 07:46:54 version -- app/version.sh@18 -- # minor=1 00:06:52.204 07:46:54 version -- app/version.sh@19 -- # get_header_version patch 00:06:52.204 07:46:54 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:52.204 07:46:54 version -- app/version.sh@14 -- # cut -f2 00:06:52.204 07:46:54 version -- app/version.sh@14 -- # tr -d '"' 00:06:52.204 07:46:54 version -- app/version.sh@19 -- # patch=0 00:06:52.204 07:46:54 version -- app/version.sh@20 -- # get_header_version suffix 00:06:52.204 07:46:54 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:52.204 07:46:54 version -- app/version.sh@14 -- # cut -f2 00:06:52.204 07:46:54 version -- app/version.sh@14 -- # tr -d '"' 00:06:52.204 07:46:54 version -- app/version.sh@20 -- # suffix=-pre 00:06:52.204 07:46:54 version -- app/version.sh@22 -- # version=25.1 00:06:52.204 07:46:54 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:52.204 07:46:54 version -- app/version.sh@28 -- # version=25.1rc0 00:06:52.204 07:46:54 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:52.204 07:46:54 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:52.204 07:46:54 version -- app/version.sh@30 -- # py_version=25.1rc0 00:06:52.204 07:46:54 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:06:52.204 00:06:52.204 real 0m0.239s 00:06:52.204 user 0m0.176s 00:06:52.204 sys 0m0.094s 00:06:52.204 ************************************ 00:06:52.204 END TEST version 00:06:52.204 ************************************ 00:06:52.204 07:46:54 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:52.204 07:46:54 version -- common/autotest_common.sh@10 -- # set +x 00:06:52.463 07:46:54 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:06:52.463 07:46:54 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:06:52.463 07:46:54 -- spdk/autotest.sh@194 -- # uname -s 00:06:52.463 07:46:54 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:06:52.463 07:46:54 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:52.463 07:46:54 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:52.463 07:46:54 -- spdk/autotest.sh@207 -- # '[' 1 -eq 1 ']' 00:06:52.463 07:46:54 -- spdk/autotest.sh@208 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:06:52.463 07:46:54 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:52.463 07:46:54 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:52.463 07:46:54 -- common/autotest_common.sh@10 -- # set +x 00:06:52.463 ************************************ 00:06:52.463 START TEST blockdev_nvme 00:06:52.463 ************************************ 00:06:52.463 07:46:54 blockdev_nvme -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:06:52.463 * Looking for test storage... 00:06:52.463 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:06:52.463 07:46:54 blockdev_nvme -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:52.463 07:46:54 blockdev_nvme -- common/autotest_common.sh@1681 -- # lcov --version 00:06:52.463 07:46:54 blockdev_nvme -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:52.463 07:46:54 blockdev_nvme -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:52.463 07:46:54 blockdev_nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:52.463 07:46:54 blockdev_nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:52.463 07:46:54 blockdev_nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:52.463 07:46:54 blockdev_nvme -- scripts/common.sh@336 -- # IFS=.-: 00:06:52.463 07:46:54 blockdev_nvme -- scripts/common.sh@336 -- # read -ra ver1 00:06:52.463 07:46:54 blockdev_nvme -- scripts/common.sh@337 -- # IFS=.-: 00:06:52.463 07:46:54 blockdev_nvme -- scripts/common.sh@337 -- # read -ra ver2 00:06:52.463 07:46:54 blockdev_nvme -- scripts/common.sh@338 -- # local 'op=<' 00:06:52.463 07:46:54 blockdev_nvme -- scripts/common.sh@340 -- # ver1_l=2 00:06:52.463 07:46:54 blockdev_nvme -- scripts/common.sh@341 -- # ver2_l=1 00:06:52.463 07:46:54 blockdev_nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:52.463 07:46:54 blockdev_nvme -- scripts/common.sh@344 -- # case "$op" in 00:06:52.463 07:46:54 blockdev_nvme -- scripts/common.sh@345 -- # : 1 00:06:52.463 07:46:54 blockdev_nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:52.463 07:46:54 blockdev_nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:52.463 07:46:54 blockdev_nvme -- scripts/common.sh@365 -- # decimal 1 00:06:52.463 07:46:54 blockdev_nvme -- scripts/common.sh@353 -- # local d=1 00:06:52.463 07:46:54 blockdev_nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:52.463 07:46:54 blockdev_nvme -- scripts/common.sh@355 -- # echo 1 00:06:52.463 07:46:54 blockdev_nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:06:52.463 07:46:54 blockdev_nvme -- scripts/common.sh@366 -- # decimal 2 00:06:52.463 07:46:54 blockdev_nvme -- scripts/common.sh@353 -- # local d=2 00:06:52.463 07:46:54 blockdev_nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:52.463 07:46:54 blockdev_nvme -- scripts/common.sh@355 -- # echo 2 00:06:52.463 07:46:54 blockdev_nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:06:52.463 07:46:54 blockdev_nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:52.463 07:46:54 blockdev_nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:52.463 07:46:54 blockdev_nvme -- scripts/common.sh@368 -- # return 0 00:06:52.463 07:46:54 blockdev_nvme -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:52.463 07:46:54 blockdev_nvme -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:52.463 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:52.463 --rc genhtml_branch_coverage=1 00:06:52.463 --rc genhtml_function_coverage=1 00:06:52.463 --rc genhtml_legend=1 00:06:52.463 --rc geninfo_all_blocks=1 00:06:52.463 --rc geninfo_unexecuted_blocks=1 00:06:52.463 00:06:52.463 ' 00:06:52.463 07:46:54 blockdev_nvme -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:52.463 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:52.463 --rc genhtml_branch_coverage=1 00:06:52.463 --rc genhtml_function_coverage=1 00:06:52.463 --rc genhtml_legend=1 00:06:52.463 --rc geninfo_all_blocks=1 00:06:52.463 --rc geninfo_unexecuted_blocks=1 00:06:52.463 00:06:52.463 ' 00:06:52.463 07:46:54 blockdev_nvme -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:52.463 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:52.463 --rc genhtml_branch_coverage=1 00:06:52.463 --rc genhtml_function_coverage=1 00:06:52.463 --rc genhtml_legend=1 00:06:52.463 --rc geninfo_all_blocks=1 00:06:52.463 --rc geninfo_unexecuted_blocks=1 00:06:52.463 00:06:52.463 ' 00:06:52.463 07:46:54 blockdev_nvme -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:52.463 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:52.463 --rc genhtml_branch_coverage=1 00:06:52.463 --rc genhtml_function_coverage=1 00:06:52.463 --rc genhtml_legend=1 00:06:52.463 --rc geninfo_all_blocks=1 00:06:52.463 --rc geninfo_unexecuted_blocks=1 00:06:52.463 00:06:52.463 ' 00:06:52.463 07:46:54 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:52.463 07:46:54 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:06:52.463 07:46:54 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:06:52.463 07:46:54 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:52.463 07:46:54 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:06:52.463 07:46:54 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:06:52.463 07:46:54 blockdev_nvme -- bdev/blockdev.sh@17 -- # export 
RPC_PIPE_TIMEOUT=30 00:06:52.463 07:46:54 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:06:52.463 07:46:54 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:06:52.463 07:46:54 blockdev_nvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:06:52.463 07:46:54 blockdev_nvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:06:52.463 07:46:54 blockdev_nvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:06:52.463 07:46:54 blockdev_nvme -- bdev/blockdev.sh@673 -- # uname -s 00:06:52.463 07:46:54 blockdev_nvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:06:52.463 07:46:54 blockdev_nvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:06:52.463 07:46:54 blockdev_nvme -- bdev/blockdev.sh@681 -- # test_type=nvme 00:06:52.463 07:46:54 blockdev_nvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:06:52.463 07:46:54 blockdev_nvme -- bdev/blockdev.sh@683 -- # dek= 00:06:52.463 07:46:54 blockdev_nvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:06:52.463 07:46:54 blockdev_nvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:06:52.463 07:46:54 blockdev_nvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:06:52.463 07:46:54 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == bdev ]] 00:06:52.463 07:46:54 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == crypto_* ]] 00:06:52.463 07:46:54 blockdev_nvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:06:52.463 07:46:54 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=61451 00:06:52.463 07:46:54 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:06:52.463 07:46:54 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:06:52.463 07:46:54 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 61451 00:06:52.463 07:46:54 blockdev_nvme -- common/autotest_common.sh@831 -- # '[' -z 61451 ']' 00:06:52.463 07:46:54 blockdev_nvme -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:52.463 07:46:54 blockdev_nvme -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:52.463 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:52.463 07:46:54 blockdev_nvme -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:52.463 07:46:54 blockdev_nvme -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:52.463 07:46:54 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:52.722 [2024-10-09 07:46:54.557997] Starting SPDK v25.01-pre git sha1 1c2942c86 / DPDK 24.03.0 initialization... 
00:06:52.722 [2024-10-09 07:46:54.558150] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61451 ] 00:06:52.722 [2024-10-09 07:46:54.722309] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.982 [2024-10-09 07:46:54.962754] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.930 07:46:55 blockdev_nvme -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:53.930 07:46:55 blockdev_nvme -- common/autotest_common.sh@864 -- # return 0 00:06:53.930 07:46:55 blockdev_nvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:06:53.930 07:46:55 blockdev_nvme -- bdev/blockdev.sh@698 -- # setup_nvme_conf 00:06:53.930 07:46:55 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:06:53.930 07:46:55 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:06:53.930 07:46:55 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:53.930 07:46:55 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:06:53.930 07:46:55 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:53.930 07:46:55 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:54.187 07:46:56 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:54.187 07:46:56 blockdev_nvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:06:54.187 07:46:56 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:54.187 07:46:56 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:54.187 07:46:56 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:54.187 07:46:56 blockdev_nvme -- bdev/blockdev.sh@739 -- # cat 00:06:54.187 07:46:56 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:06:54.187 07:46:56 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:54.187 07:46:56 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:54.446 07:46:56 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:54.446 07:46:56 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:06:54.446 07:46:56 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:54.446 07:46:56 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:54.446 07:46:56 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:54.446 07:46:56 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:06:54.446 07:46:56 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:54.446 07:46:56 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:54.446 07:46:56 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:54.446 07:46:56 blockdev_nvme -- 
bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:06:54.446 07:46:56 blockdev_nvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:06:54.446 07:46:56 blockdev_nvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:06:54.446 07:46:56 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:54.446 07:46:56 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:54.446 07:46:56 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:54.446 07:46:56 blockdev_nvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:06:54.446 07:46:56 blockdev_nvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:06:54.447 07:46:56 blockdev_nvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "8d78b8ac-d2fc-452c-8c46-0c2df76ef0c1"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "8d78b8ac-d2fc-452c-8c46-0c2df76ef0c1",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "260801ec-485b-4000-a931-4b56cd3bfa42"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "260801ec-485b-4000-a931-4b56cd3bfa42",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": 
"nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "47d471c0-c470-4578-975e-2e9aeb3092aa"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "47d471c0-c470-4578-975e-2e9aeb3092aa",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "7c88d2a8-1597-41c6-9135-048ebe9e33d8"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "7c88d2a8-1597-41c6-9135-048ebe9e33d8",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "a6fe4a9e-aad3-44c0-842e-d4f3e230717a"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 
1048576,' ' "uuid": "a6fe4a9e-aad3-44c0-842e-d4f3e230717a",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "fa82d0cc-1be1-46a0-8182-0e4961c07e7a"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "fa82d0cc-1be1-46a0-8182-0e4961c07e7a",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:06:54.447 07:46:56 blockdev_nvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:06:54.447 07:46:56 blockdev_nvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:06:54.447 07:46:56 blockdev_nvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:06:54.447 07:46:56 blockdev_nvme -- bdev/blockdev.sh@753 -- # killprocess 61451 00:06:54.447 07:46:56 blockdev_nvme -- common/autotest_common.sh@950 -- # '[' -z 61451 ']' 00:06:54.447 07:46:56 blockdev_nvme -- common/autotest_common.sh@954 -- # kill -0 61451 00:06:54.447 07:46:56 blockdev_nvme -- common/autotest_common.sh@955 -- # uname 00:06:54.447 07:46:56 
blockdev_nvme -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:54.447 07:46:56 blockdev_nvme -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61451 00:06:54.447 killing process with pid 61451 00:06:54.447 07:46:56 blockdev_nvme -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:54.447 07:46:56 blockdev_nvme -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:54.447 07:46:56 blockdev_nvme -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61451' 00:06:54.447 07:46:56 blockdev_nvme -- common/autotest_common.sh@969 -- # kill 61451 00:06:54.447 07:46:56 blockdev_nvme -- common/autotest_common.sh@974 -- # wait 61451 00:06:56.976 07:46:58 blockdev_nvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:06:56.976 07:46:58 blockdev_nvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:56.976 07:46:58 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:06:56.976 07:46:58 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:56.976 07:46:58 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:56.976 ************************************ 00:06:56.976 START TEST bdev_hello_world 00:06:56.976 ************************************ 00:06:56.976 07:46:58 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:56.976 [2024-10-09 07:46:58.797991] Starting SPDK v25.01-pre git sha1 1c2942c86 / DPDK 24.03.0 initialization... 00:06:56.976 [2024-10-09 07:46:58.798145] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61546 ] 00:06:56.976 [2024-10-09 07:46:58.965906] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.234 [2024-10-09 07:46:59.175048] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.168 [2024-10-09 07:46:59.821214] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:06:58.168 [2024-10-09 07:46:59.821298] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:06:58.168 [2024-10-09 07:46:59.821373] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:06:58.168 [2024-10-09 07:46:59.825735] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:06:58.168 [2024-10-09 07:46:59.826360] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:06:58.168 [2024-10-09 07:46:59.826429] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:06:58.168 [2024-10-09 07:46:59.826625] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
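The hello_bdev run traced above is driven entirely by the JSON file passed via --json. A minimal hand-written equivalent of what gen_nvme.sh emits, trimmed to just the Nvme0 controller from this run, might look like the sketch below (the /tmp config path is an illustrative assumption):

cat > /tmp/bdev.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": { "trtype": "PCIe", "name": "Nvme0", "traddr": "0000:00:10.0" }
        }
      ]
    }
  ]
}
EOF
/home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /tmp/bdev.json -b Nvme0n1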
00:06:58.168 00:06:58.168 [2024-10-09 07:46:59.826689] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:06:59.136 00:06:59.136 real 0m2.378s 00:06:59.136 user 0m2.022s 00:06:59.136 sys 0m0.237s 00:06:59.136 07:47:01 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:59.136 ************************************ 00:06:59.136 END TEST bdev_hello_world 00:06:59.136 07:47:01 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:06:59.136 ************************************ 00:06:59.136 07:47:01 blockdev_nvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:06:59.136 07:47:01 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:59.136 07:47:01 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:59.136 07:47:01 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:59.136 ************************************ 00:06:59.136 START TEST bdev_bounds 00:06:59.136 ************************************ 00:06:59.136 07:47:01 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:06:59.136 07:47:01 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=61599 00:06:59.136 07:47:01 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:06:59.136 07:47:01 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 61599' 00:06:59.136 Process bdevio pid: 61599 00:06:59.136 07:47:01 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:06:59.136 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:59.136 07:47:01 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 61599 00:06:59.136 07:47:01 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@831 -- # '[' -z 61599 ']' 00:06:59.136 07:47:01 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:59.136 07:47:01 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:59.136 07:47:01 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:59.136 07:47:01 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:59.136 07:47:01 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:06:59.394 [2024-10-09 07:47:01.203227] Starting SPDK v25.01-pre git sha1 1c2942c86 / DPDK 24.03.0 initialization... 
00:06:59.394 [2024-10-09 07:47:01.203914] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61599 ] 00:06:59.394 [2024-10-09 07:47:01.365798] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:59.653 [2024-10-09 07:47:01.557262] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:06:59.653 [2024-10-09 07:47:01.557367] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.653 [2024-10-09 07:47:01.557376] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:07:00.588 07:47:02 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:00.588 07:47:02 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@864 -- # return 0 00:07:00.588 07:47:02 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:07:00.588 I/O targets: 00:07:00.588 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:07:00.588 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:07:00.588 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:00.588 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:00.588 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:00.588 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:07:00.588 00:07:00.588 00:07:00.588 CUnit - A unit testing framework for C - Version 2.1-3 00:07:00.588 http://cunit.sourceforge.net/ 00:07:00.588 00:07:00.588 00:07:00.588 Suite: bdevio tests on: Nvme3n1 00:07:00.588 Test: blockdev write read block ...passed 00:07:00.588 Test: blockdev write zeroes read block ...passed 00:07:00.588 Test: blockdev write zeroes read no split ...passed 00:07:00.588 Test: blockdev write zeroes read split ...passed 00:07:00.588 Test: blockdev write zeroes read split partial ...passed 00:07:00.588 Test: blockdev reset ...[2024-10-09 07:47:02.511903] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0] resetting controller 00:07:00.588 [2024-10-09 07:47:02.516184] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:00.588 passed 00:07:00.588 Test: blockdev write read 8 blocks ...passed 00:07:00.588 Test: blockdev write read size > 128k ...passed 00:07:00.588 Test: blockdev write read invalid size ...passed 00:07:00.588 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:00.588 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:00.588 Test: blockdev write read max offset ...passed 00:07:00.588 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:00.588 Test: blockdev writev readv 8 blocks ...passed 00:07:00.588 Test: blockdev writev readv 30 x 1block ...passed 00:07:00.588 Test: blockdev writev readv block ...passed 00:07:00.588 Test: blockdev writev readv size > 128k ...passed 00:07:00.588 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:00.588 Test: blockdev comparev and writev ...[2024-10-09 07:47:02.524081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c260a000 len:0x1000 00:07:00.589 [2024-10-09 07:47:02.524179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:00.589 passed 00:07:00.589 Test: blockdev nvme passthru rw ...passed 00:07:00.589 Test: blockdev nvme passthru vendor specific ...passed 00:07:00.589 Test: blockdev nvme admin passthru ...[2024-10-09 07:47:02.524849] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:00.589 [2024-10-09 07:47:02.524901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:00.589 passed 00:07:00.589 Test: blockdev copy ...passed 00:07:00.589 Suite: bdevio tests on: Nvme2n3 00:07:00.589 Test: blockdev write read block ...passed 00:07:00.589 Test: blockdev write zeroes read block ...passed 00:07:00.589 Test: blockdev write zeroes read no split ...passed 00:07:00.589 Test: blockdev write zeroes read split ...passed 00:07:00.589 Test: blockdev write zeroes read split partial ...passed 00:07:00.589 Test: blockdev reset ...[2024-10-09 07:47:02.595083] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:07:00.847 passed 00:07:00.847 Test: blockdev write read 8 blocks ...[2024-10-09 07:47:02.599170] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:00.847 passed 00:07:00.847 Test: blockdev write read size > 128k ...passed 00:07:00.847 Test: blockdev write read invalid size ...passed 00:07:00.847 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:00.848 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:00.848 Test: blockdev write read max offset ...passed 00:07:00.848 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:00.848 Test: blockdev writev readv 8 blocks ...passed 00:07:00.848 Test: blockdev writev readv 30 x 1block ...passed 00:07:00.848 Test: blockdev writev readv block ...passed 00:07:00.848 Test: blockdev writev readv size > 128k ...passed 00:07:00.848 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:00.848 Test: blockdev comparev and writev ...[2024-10-09 07:47:02.605980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 passed 00:07:00.848 Test: blockdev nvme passthru rw ...passed 00:07:00.848 Test: blockdev nvme passthru vendor specific ...SGL DATA BLOCK ADDRESS 0x2a6604000 len:0x1000 00:07:00.848 [2024-10-09 07:47:02.606189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:00.848 [2024-10-09 07:47:02.606809] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:00.848 [2024-10-09 07:47:02.606853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:00.848 passed 00:07:00.848 Test: blockdev nvme admin passthru ...passed 00:07:00.848 Test: blockdev copy ...passed 00:07:00.848 Suite: bdevio tests on: Nvme2n2 00:07:00.848 Test: blockdev write read block ...passed 00:07:00.848 Test: blockdev write zeroes read block ...passed 00:07:00.848 Test: blockdev write zeroes read no split ...passed 00:07:00.848 Test: blockdev write zeroes read split ...passed 00:07:00.848 Test: blockdev write zeroes read split partial ...passed 00:07:00.848 Test: blockdev reset ...[2024-10-09 07:47:02.674253] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:07:00.848 [2024-10-09 07:47:02.678369] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:00.848 passed 00:07:00.848 Test: blockdev write read 8 blocks ...passed 00:07:00.848 Test: blockdev write read size > 128k ...passed 00:07:00.848 Test: blockdev write read invalid size ...passed 00:07:00.848 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:00.848 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:00.848 Test: blockdev write read max offset ...passed 00:07:00.848 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:00.848 Test: blockdev writev readv 8 blocks ...passed 00:07:00.848 Test: blockdev writev readv 30 x 1block ...passed 00:07:00.848 Test: blockdev writev readv block ...passed 00:07:00.848 Test: blockdev writev readv size > 128k ...passed 00:07:00.848 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:00.848 Test: blockdev comparev and writev ...[2024-10-09 07:47:02.687140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d723a000 len:0x1000 00:07:00.848 [2024-10-09 07:47:02.687248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:00.848 passed 00:07:00.848 Test: blockdev nvme passthru rw ...passed 00:07:00.848 Test: blockdev nvme passthru vendor specific ...passed 00:07:00.848 Test: blockdev nvme admin passthru ...[2024-10-09 07:47:02.688040] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:00.848 [2024-10-09 07:47:02.688089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:00.848 passed 00:07:00.848 Test: blockdev copy ...passed 00:07:00.848 Suite: bdevio tests on: Nvme2n1 00:07:00.848 Test: blockdev write read block ...passed 00:07:00.848 Test: blockdev write zeroes read block ...passed 00:07:00.848 Test: blockdev write zeroes read no split ...passed 00:07:00.848 Test: blockdev write zeroes read split ...passed 00:07:00.848 Test: blockdev write zeroes read split partial ...passed 00:07:00.848 Test: blockdev reset ...[2024-10-09 07:47:02.754025] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:07:00.848 [2024-10-09 07:47:02.758519] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:00.848 passed 00:07:00.848 Test: blockdev write read 8 blocks ...passed 00:07:00.848 Test: blockdev write read size > 128k ...passed 00:07:00.848 Test: blockdev write read invalid size ...passed 00:07:00.848 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:00.848 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:00.848 Test: blockdev write read max offset ...passed 00:07:00.848 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:00.848 Test: blockdev writev readv 8 blocks ...passed 00:07:00.848 Test: blockdev writev readv 30 x 1block ...passed 00:07:00.848 Test: blockdev writev readv block ...passed 00:07:00.848 Test: blockdev writev readv size > 128k ...passed 00:07:00.848 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:00.848 Test: blockdev comparev and writev ...[2024-10-09 07:47:02.767182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d7234000 len:0x1000 00:07:00.848 [2024-10-09 07:47:02.767260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:00.848 passed 00:07:00.848 Test: blockdev nvme passthru rw ...passed 00:07:00.848 Test: blockdev nvme passthru vendor specific ...passed 00:07:00.848 Test: blockdev nvme admin passthru ...[2024-10-09 07:47:02.768064] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:00.848 [2024-10-09 07:47:02.768114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:00.848 passed 00:07:00.848 Test: blockdev copy ...passed 00:07:00.848 Suite: bdevio tests on: Nvme1n1 00:07:00.848 Test: blockdev write read block ...passed 00:07:00.848 Test: blockdev write zeroes read block ...passed 00:07:00.848 Test: blockdev write zeroes read no split ...passed 00:07:00.848 Test: blockdev write zeroes read split ...passed 00:07:00.848 Test: blockdev write zeroes read split partial ...passed 00:07:00.848 Test: blockdev reset ...[2024-10-09 07:47:02.836659] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0] resetting controller 00:07:00.848 [2024-10-09 07:47:02.840341] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:00.848 passed 00:07:00.848 Test: blockdev write read 8 blocks ...passed 00:07:00.848 Test: blockdev write read size > 128k ...passed 00:07:00.848 Test: blockdev write read invalid size ...passed 00:07:00.848 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:00.848 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:00.848 Test: blockdev write read max offset ...passed 00:07:00.848 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:00.848 Test: blockdev writev readv 8 blocks ...passed 00:07:00.848 Test: blockdev writev readv 30 x 1block ...passed 00:07:00.848 Test: blockdev writev readv block ...passed 00:07:00.848 Test: blockdev writev readv size > 128k ...passed 00:07:00.848 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:00.848 Test: blockdev comparev and writev ...[2024-10-09 07:47:02.848581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 passed 00:07:00.848 Test: blockdev nvme passthru rw ...SGL DATA BLOCK ADDRESS 0x2d7230000 len:0x1000 00:07:00.848 [2024-10-09 07:47:02.848796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:00.848 passed 00:07:00.848 Test: blockdev nvme passthru vendor specific ...passed 00:07:00.848 Test: blockdev nvme admin passthru ...[2024-10-09 07:47:02.849788] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:00.848 [2024-10-09 07:47:02.849845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:00.848 passed 00:07:00.848 Test: blockdev copy ...passed 00:07:00.848 Suite: bdevio tests on: Nvme0n1 00:07:00.848 Test: blockdev write read block ...passed 00:07:01.107 Test: blockdev write zeroes read block ...passed 00:07:01.107 Test: blockdev write zeroes read no split ...passed 00:07:01.107 Test: blockdev write zeroes read split ...passed 00:07:01.107 Test: blockdev write zeroes read split partial ...passed 00:07:01.107 Test: blockdev reset ...[2024-10-09 07:47:02.916613] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:07:01.107 [2024-10-09 07:47:02.920371] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:01.107 passed 00:07:01.107 Test: blockdev write read 8 blocks ...passed 00:07:01.107 Test: blockdev write read size > 128k ...passed 00:07:01.107 Test: blockdev write read invalid size ...passed 00:07:01.107 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:01.107 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:01.107 Test: blockdev write read max offset ...passed 00:07:01.107 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:01.107 Test: blockdev writev readv 8 blocks ...passed 00:07:01.107 Test: blockdev writev readv 30 x 1block ...passed 00:07:01.107 Test: blockdev writev readv block ...passed 00:07:01.107 Test: blockdev writev readv size > 128k ...passed 00:07:01.107 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:01.107 Test: blockdev comparev and writev ...passed 00:07:01.107 Test: blockdev nvme passthru rw ...[2024-10-09 07:47:02.929071] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:07:01.107 separate metadata which is not supported yet. 00:07:01.107 passed 00:07:01.107 Test: blockdev nvme passthru vendor specific ...[2024-10-09 07:47:02.929659] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:07:01.107 passed 00:07:01.107 Test: blockdev nvme admin passthru ...[2024-10-09 07:47:02.929719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:07:01.107 passed 00:07:01.107 Test: blockdev copy ...passed 00:07:01.107 00:07:01.107 Run Summary: Type Total Ran Passed Failed Inactive 00:07:01.107 suites 6 6 n/a 0 0 00:07:01.107 tests 138 138 138 0 0 00:07:01.107 asserts 893 893 893 0 n/a 00:07:01.107 00:07:01.107 Elapsed time = 1.306 seconds 00:07:01.107 0 00:07:01.107 07:47:02 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 61599 00:07:01.107 07:47:02 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 61599 ']' 00:07:01.107 07:47:02 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 61599 00:07:01.107 07:47:02 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@955 -- # uname 00:07:01.107 07:47:02 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:01.107 07:47:02 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61599 00:07:01.107 killing process with pid 61599 00:07:01.107 07:47:02 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:01.107 07:47:02 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:01.107 07:47:02 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61599' 00:07:01.107 07:47:02 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@969 -- # kill 61599 00:07:01.107 07:47:02 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@974 -- # wait 61599 00:07:02.041 07:47:04 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:07:02.041 00:07:02.041 real 0m2.890s 00:07:02.041 user 0m7.417s 00:07:02.041 sys 0m0.358s 00:07:02.041 07:47:04 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:02.041 07:47:04 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:07:02.041 ************************************ 00:07:02.041 END 
TEST bdev_bounds 00:07:02.041 ************************************ 00:07:02.042 07:47:04 blockdev_nvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:02.042 07:47:04 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:02.042 07:47:04 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:02.042 07:47:04 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:02.042 ************************************ 00:07:02.042 START TEST bdev_nbd 00:07:02.042 ************************************ 00:07:02.042 07:47:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:02.042 07:47:04 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:07:02.042 07:47:04 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:07:02.042 07:47:04 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:02.042 07:47:04 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:02.042 07:47:04 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:02.042 07:47:04 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:07:02.042 07:47:04 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 00:07:02.042 07:47:04 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:07:02.042 07:47:04 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:07:02.042 07:47:04 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:07:02.042 07:47:04 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:07:02.042 07:47:04 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:02.042 07:47:04 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:07:02.042 07:47:04 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:02.042 07:47:04 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:07:02.299 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
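The NBD round-trip exercised in the trace below can be reproduced by hand once bdev_svc is listening on /var/tmp/spdk-nbd.sock. A minimal sketch using only RPCs that appear in this run (the explicit /dev/nbd0 device and /tmp output path are illustrative assumptions; the test itself lets the nbd device be auto-assigned):

/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0
# read one 4 KiB block straight from the exported device, bypassing the page cache
dd if=/dev/nbd0 of=/tmp/nbdtest bs=4096 count=1 iflag=direct
# list the current bdev-to-nbd mappings, then tear the export down
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0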
00:07:02.299 07:47:04 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=61659 00:07:02.299 07:47:04 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:02.299 07:47:04 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:07:02.299 07:47:04 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 61659 /var/tmp/spdk-nbd.sock 00:07:02.299 07:47:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 61659 ']' 00:07:02.299 07:47:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:02.300 07:47:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:02.300 07:47:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:02.300 07:47:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:02.300 07:47:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:07:02.300 [2024-10-09 07:47:04.156856] Starting SPDK v25.01-pre git sha1 1c2942c86 / DPDK 24.03.0 initialization... 00:07:02.300 [2024-10-09 07:47:04.157014] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:02.558 [2024-10-09 07:47:04.323139] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.558 [2024-10-09 07:47:04.516748] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.493 07:47:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:03.493 07:47:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # return 0 00:07:03.493 07:47:05 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:03.493 07:47:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:03.493 07:47:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:03.493 07:47:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:07:03.493 07:47:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:03.493 07:47:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:03.493 07:47:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:03.493 07:47:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:07:03.493 07:47:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:07:03.493 07:47:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:07:03.493 07:47:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:07:03.493 07:47:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:03.493 07:47:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:07:03.751 07:47:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:07:03.751 07:47:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:07:03.751 07:47:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:07:03.751 07:47:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:07:03.751 07:47:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:03.751 07:47:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:03.751 07:47:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:03.751 07:47:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:07:03.751 07:47:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:03.751 07:47:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:03.751 07:47:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:03.752 07:47:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:03.752 1+0 records in 00:07:03.752 1+0 records out 00:07:03.752 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000671192 s, 6.1 MB/s 00:07:03.752 07:47:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:03.752 07:47:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:03.752 07:47:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:03.752 07:47:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:03.752 07:47:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:03.752 07:47:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:03.752 07:47:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:03.752 07:47:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:07:04.009 07:47:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:07:04.009 07:47:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:07:04.009 07:47:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:07:04.009 07:47:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:07:04.009 07:47:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:04.009 07:47:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:04.009 07:47:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:04.009 07:47:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:07:04.009 07:47:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:04.009 07:47:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:04.009 07:47:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:04.009 07:47:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:04.009 1+0 records in 00:07:04.009 1+0 records out 00:07:04.009 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000702302 s, 5.8 MB/s 00:07:04.009 07:47:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:04.009 07:47:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:04.009 07:47:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:04.009 07:47:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:04.009 07:47:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:04.009 07:47:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:04.009 07:47:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:04.009 07:47:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:07:04.576 07:47:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:07:04.576 07:47:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:07:04.576 07:47:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:07:04.576 07:47:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd2 00:07:04.576 07:47:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:04.576 07:47:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:04.576 07:47:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:04.576 07:47:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd2 /proc/partitions 00:07:04.576 07:47:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:04.576 07:47:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:04.576 07:47:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:04.576 07:47:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:04.576 1+0 records in 00:07:04.576 1+0 records out 00:07:04.576 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000570592 s, 7.2 MB/s 00:07:04.576 07:47:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:04.576 07:47:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:04.576 07:47:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:04.576 07:47:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:04.576 07:47:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:04.576 07:47:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:04.576 07:47:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:04.576 07:47:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:07:04.877 07:47:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:07:04.877 07:47:06 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:07:04.877 07:47:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:07:04.877 07:47:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd3 00:07:04.877 07:47:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:04.877 07:47:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:04.877 07:47:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:04.877 07:47:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd3 /proc/partitions 00:07:04.877 07:47:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:04.877 07:47:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:04.877 07:47:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:04.877 07:47:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:04.877 1+0 records in 00:07:04.877 1+0 records out 00:07:04.877 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000742086 s, 5.5 MB/s 00:07:04.877 07:47:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:04.877 07:47:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:04.877 07:47:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:04.877 07:47:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:04.877 07:47:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:04.877 07:47:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:04.877 07:47:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:04.877 07:47:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:07:05.135 07:47:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:07:05.135 07:47:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:07:05.135 07:47:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:07:05.135 07:47:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd4 00:07:05.135 07:47:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:05.135 07:47:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:05.135 07:47:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:05.135 07:47:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd4 /proc/partitions 00:07:05.135 07:47:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:05.135 07:47:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:05.135 07:47:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:05.135 07:47:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:05.135 1+0 records in 00:07:05.135 1+0 records out 00:07:05.135 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000717632 s, 5.7 MB/s 00:07:05.135 07:47:06 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:05.135 07:47:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:05.135 07:47:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:05.135 07:47:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:05.135 07:47:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:05.135 07:47:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:05.135 07:47:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:05.136 07:47:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:07:05.394 07:47:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:07:05.394 07:47:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:07:05.394 07:47:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:07:05.394 07:47:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd5 00:07:05.394 07:47:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:05.394 07:47:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:05.394 07:47:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:05.394 07:47:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd5 /proc/partitions 00:07:05.394 07:47:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:05.394 07:47:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:05.394 07:47:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:05.394 07:47:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:05.394 1+0 records in 00:07:05.394 1+0 records out 00:07:05.394 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000733188 s, 5.6 MB/s 00:07:05.394 07:47:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:05.394 07:47:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:05.394 07:47:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:05.394 07:47:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:05.395 07:47:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:05.395 07:47:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:05.395 07:47:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:05.395 07:47:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:05.964 07:47:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:07:05.964 { 00:07:05.964 "nbd_device": "/dev/nbd0", 00:07:05.964 "bdev_name": "Nvme0n1" 00:07:05.964 }, 00:07:05.964 { 00:07:05.964 "nbd_device": "/dev/nbd1", 00:07:05.964 "bdev_name": "Nvme1n1" 00:07:05.964 }, 00:07:05.964 { 00:07:05.964 "nbd_device": "/dev/nbd2", 00:07:05.964 "bdev_name": "Nvme2n1" 00:07:05.964 }, 00:07:05.964 
{ 00:07:05.964 "nbd_device": "/dev/nbd3", 00:07:05.964 "bdev_name": "Nvme2n2" 00:07:05.964 }, 00:07:05.964 { 00:07:05.964 "nbd_device": "/dev/nbd4", 00:07:05.964 "bdev_name": "Nvme2n3" 00:07:05.964 }, 00:07:05.964 { 00:07:05.964 "nbd_device": "/dev/nbd5", 00:07:05.964 "bdev_name": "Nvme3n1" 00:07:05.964 } 00:07:05.964 ]' 00:07:05.964 07:47:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:07:05.964 07:47:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:07:05.964 07:47:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:07:05.964 { 00:07:05.964 "nbd_device": "/dev/nbd0", 00:07:05.964 "bdev_name": "Nvme0n1" 00:07:05.964 }, 00:07:05.964 { 00:07:05.964 "nbd_device": "/dev/nbd1", 00:07:05.964 "bdev_name": "Nvme1n1" 00:07:05.964 }, 00:07:05.964 { 00:07:05.964 "nbd_device": "/dev/nbd2", 00:07:05.964 "bdev_name": "Nvme2n1" 00:07:05.964 }, 00:07:05.964 { 00:07:05.964 "nbd_device": "/dev/nbd3", 00:07:05.964 "bdev_name": "Nvme2n2" 00:07:05.964 }, 00:07:05.964 { 00:07:05.964 "nbd_device": "/dev/nbd4", 00:07:05.964 "bdev_name": "Nvme2n3" 00:07:05.964 }, 00:07:05.964 { 00:07:05.964 "nbd_device": "/dev/nbd5", 00:07:05.964 "bdev_name": "Nvme3n1" 00:07:05.964 } 00:07:05.964 ]' 00:07:05.964 07:47:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:07:05.964 07:47:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:05.964 07:47:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:07:05.964 07:47:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:05.964 07:47:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:05.964 07:47:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:05.964 07:47:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:06.531 07:47:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:06.531 07:47:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:06.531 07:47:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:06.531 07:47:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:06.531 07:47:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:06.531 07:47:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:06.531 07:47:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:06.531 07:47:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:06.531 07:47:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:06.531 07:47:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:06.789 07:47:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:06.789 07:47:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:06.789 07:47:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:06.789 07:47:08 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:06.789 07:47:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:06.789 07:47:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:06.789 07:47:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:06.789 07:47:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:06.789 07:47:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:06.789 07:47:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:07:07.047 07:47:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:07:07.047 07:47:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:07:07.047 07:47:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:07:07.047 07:47:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:07.047 07:47:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:07.047 07:47:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:07:07.047 07:47:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:07.047 07:47:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:07.047 07:47:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:07.047 07:47:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:07:07.614 07:47:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:07:07.614 07:47:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:07:07.614 07:47:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:07:07.614 07:47:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:07.614 07:47:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:07.614 07:47:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:07:07.614 07:47:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:07.614 07:47:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:07.614 07:47:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:07.614 07:47:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:07:07.872 07:47:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:07:07.873 07:47:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:07:07.873 07:47:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:07:07.873 07:47:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:07.873 07:47:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:07.873 07:47:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:07:07.873 07:47:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:07.873 07:47:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:07.873 07:47:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:07.873 07:47:09 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:07:08.132 07:47:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:07:08.132 07:47:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:07:08.132 07:47:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:07:08.132 07:47:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:08.132 07:47:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:08.132 07:47:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:07:08.132 07:47:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:08.132 07:47:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:08.132 07:47:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:08.132 07:47:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:08.132 07:47:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:08.700 07:47:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:08.700 07:47:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:08.700 07:47:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:08.700 07:47:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:08.700 07:47:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:07:08.700 07:47:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:08.700 07:47:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:07:08.700 07:47:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:07:08.700 07:47:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:07:08.700 07:47:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:07:08.700 07:47:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:07:08.700 07:47:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:07:08.700 07:47:10 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:07:08.700 07:47:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:08.700 07:47:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:08.700 07:47:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:08.700 07:47:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:08.700 07:47:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:08.700 07:47:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:07:08.700 07:47:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:08.700 07:47:10 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:08.700 07:47:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:08.700 07:47:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:08.700 07:47:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:08.700 07:47:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:07:08.700 07:47:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:08.700 07:47:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:08.700 07:47:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:07:08.959 /dev/nbd0 00:07:08.959 07:47:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:08.959 07:47:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:08.959 07:47:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:07:08.959 07:47:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:08.959 07:47:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:08.959 07:47:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:08.959 07:47:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:07:08.959 07:47:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:08.959 07:47:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:08.959 07:47:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:08.959 07:47:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:08.959 1+0 records in 00:07:08.959 1+0 records out 00:07:08.959 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000841216 s, 4.9 MB/s 00:07:08.959 07:47:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:08.959 07:47:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:08.959 07:47:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:08.959 07:47:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:08.959 07:47:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:08.959 07:47:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:08.959 07:47:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:08.959 07:47:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:07:09.526 /dev/nbd1 00:07:09.526 07:47:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:09.526 07:47:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:09.526 07:47:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:07:09.526 07:47:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:09.526 07:47:11 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:09.526 07:47:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:09.526 07:47:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:07:09.526 07:47:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:09.526 07:47:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:09.526 07:47:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:09.526 07:47:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:09.526 1+0 records in 00:07:09.526 1+0 records out 00:07:09.526 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000643791 s, 6.4 MB/s 00:07:09.526 07:47:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:09.526 07:47:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:09.526 07:47:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:09.526 07:47:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:09.526 07:47:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:09.526 07:47:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:09.526 07:47:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:09.526 07:47:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:07:09.785 /dev/nbd10 00:07:09.785 07:47:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:07:09.785 07:47:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:07:09.785 07:47:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd10 00:07:09.785 07:47:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:09.785 07:47:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:09.785 07:47:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:09.785 07:47:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd10 /proc/partitions 00:07:09.785 07:47:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:09.785 07:47:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:09.785 07:47:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:09.785 07:47:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:09.785 1+0 records in 00:07:09.785 1+0 records out 00:07:09.785 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000623147 s, 6.6 MB/s 00:07:09.785 07:47:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:09.785 07:47:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:09.785 07:47:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:09.785 07:47:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 
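The trace above is SPDK's waitfornbd readiness check, repeated for every attached device: poll /proc/partitions until the kernel registers the nbd node, then read one 4 KiB block with O_DIRECT and require a non-empty transfer. A minimal standalone sketch of the same pattern in bash (the helper name, temp path, and retry budget here are illustrative, not the exact autotest_common.sh code):

waitfornbd_sketch() {
    local nbd_name=$1 tmp=/tmp/nbdtest i size
    for ((i = 1; i <= 20; i++)); do
        # break as soon as the kernel lists the device in /proc/partitions
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1
    done
    # prove the export services reads: one 4096-byte block, bypassing the page cache
    dd if="/dev/$nbd_name" of="$tmp" bs=4096 count=1 iflag=direct || return 1
    size=$(stat -c %s "$tmp")
    rm -f "$tmp"
    [ "$size" != 0 ]    # a non-empty read means the disk is live
}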
00:07:09.785 07:47:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:09.785 07:47:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:09.785 07:47:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:09.785 07:47:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:07:10.044 /dev/nbd11 00:07:10.044 07:47:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:07:10.044 07:47:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:07:10.044 07:47:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd11 00:07:10.044 07:47:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:10.044 07:47:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:10.044 07:47:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:10.044 07:47:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd11 /proc/partitions 00:07:10.044 07:47:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:10.044 07:47:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:10.044 07:47:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:10.044 07:47:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:10.044 1+0 records in 00:07:10.044 1+0 records out 00:07:10.044 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000629256 s, 6.5 MB/s 00:07:10.044 07:47:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:10.044 07:47:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:10.044 07:47:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:10.044 07:47:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:10.044 07:47:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:10.044 07:47:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:10.044 07:47:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:10.044 07:47:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:07:10.303 /dev/nbd12 00:07:10.303 07:47:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:07:10.303 07:47:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:07:10.303 07:47:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd12 00:07:10.303 07:47:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:10.303 07:47:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:10.303 07:47:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:10.303 07:47:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd12 /proc/partitions 00:07:10.303 07:47:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:10.303 07:47:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( 
i = 1 )) 00:07:10.303 07:47:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:10.303 07:47:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:10.303 1+0 records in 00:07:10.303 1+0 records out 00:07:10.303 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00061342 s, 6.7 MB/s 00:07:10.303 07:47:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:10.303 07:47:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:10.303 07:47:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:10.303 07:47:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:10.303 07:47:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:10.303 07:47:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:10.303 07:47:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:10.303 07:47:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:07:10.562 /dev/nbd13 00:07:10.562 07:47:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:07:10.562 07:47:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:07:10.562 07:47:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd13 00:07:10.562 07:47:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:10.562 07:47:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:10.562 07:47:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:10.562 07:47:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd13 /proc/partitions 00:07:10.562 07:47:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:10.562 07:47:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:10.562 07:47:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:10.562 07:47:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:10.820 1+0 records in 00:07:10.820 1+0 records out 00:07:10.820 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000861498 s, 4.8 MB/s 00:07:10.820 07:47:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:10.820 07:47:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:10.820 07:47:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:10.820 07:47:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:10.820 07:47:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:10.820 07:47:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:10.820 07:47:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:10.820 07:47:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:10.820 07:47:12 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:10.821 07:47:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:11.079 07:47:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:11.079 { 00:07:11.079 "nbd_device": "/dev/nbd0", 00:07:11.079 "bdev_name": "Nvme0n1" 00:07:11.079 }, 00:07:11.079 { 00:07:11.079 "nbd_device": "/dev/nbd1", 00:07:11.079 "bdev_name": "Nvme1n1" 00:07:11.079 }, 00:07:11.079 { 00:07:11.079 "nbd_device": "/dev/nbd10", 00:07:11.079 "bdev_name": "Nvme2n1" 00:07:11.079 }, 00:07:11.079 { 00:07:11.079 "nbd_device": "/dev/nbd11", 00:07:11.079 "bdev_name": "Nvme2n2" 00:07:11.079 }, 00:07:11.079 { 00:07:11.079 "nbd_device": "/dev/nbd12", 00:07:11.079 "bdev_name": "Nvme2n3" 00:07:11.079 }, 00:07:11.079 { 00:07:11.079 "nbd_device": "/dev/nbd13", 00:07:11.079 "bdev_name": "Nvme3n1" 00:07:11.079 } 00:07:11.079 ]' 00:07:11.079 07:47:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:11.079 { 00:07:11.079 "nbd_device": "/dev/nbd0", 00:07:11.079 "bdev_name": "Nvme0n1" 00:07:11.079 }, 00:07:11.079 { 00:07:11.079 "nbd_device": "/dev/nbd1", 00:07:11.079 "bdev_name": "Nvme1n1" 00:07:11.079 }, 00:07:11.079 { 00:07:11.080 "nbd_device": "/dev/nbd10", 00:07:11.080 "bdev_name": "Nvme2n1" 00:07:11.080 }, 00:07:11.080 { 00:07:11.080 "nbd_device": "/dev/nbd11", 00:07:11.080 "bdev_name": "Nvme2n2" 00:07:11.080 }, 00:07:11.080 { 00:07:11.080 "nbd_device": "/dev/nbd12", 00:07:11.080 "bdev_name": "Nvme2n3" 00:07:11.080 }, 00:07:11.080 { 00:07:11.080 "nbd_device": "/dev/nbd13", 00:07:11.080 "bdev_name": "Nvme3n1" 00:07:11.080 } 00:07:11.080 ]' 00:07:11.080 07:47:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:11.080 07:47:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:11.080 /dev/nbd1 00:07:11.080 /dev/nbd10 00:07:11.080 /dev/nbd11 00:07:11.080 /dev/nbd12 00:07:11.080 /dev/nbd13' 00:07:11.080 07:47:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:11.080 /dev/nbd1 00:07:11.080 /dev/nbd10 00:07:11.080 /dev/nbd11 00:07:11.080 /dev/nbd12 00:07:11.080 /dev/nbd13' 00:07:11.080 07:47:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:11.080 07:47:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:07:11.080 07:47:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:07:11.080 07:47:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:07:11.080 07:47:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:07:11.080 07:47:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:07:11.080 07:47:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:11.080 07:47:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:11.080 07:47:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:11.080 07:47:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:11.080 07:47:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:11.080 07:47:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:07:11.080 256+0 records in 00:07:11.080 256+0 records out 00:07:11.080 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00759304 s, 138 MB/s 00:07:11.080 07:47:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:11.080 07:47:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:11.339 256+0 records in 00:07:11.339 256+0 records out 00:07:11.339 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.122741 s, 8.5 MB/s 00:07:11.339 07:47:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:11.339 07:47:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:11.339 256+0 records in 00:07:11.339 256+0 records out 00:07:11.339 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.128762 s, 8.1 MB/s 00:07:11.339 07:47:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:11.339 07:47:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:07:11.598 256+0 records in 00:07:11.598 256+0 records out 00:07:11.598 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.127358 s, 8.2 MB/s 00:07:11.598 07:47:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:11.598 07:47:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:07:11.598 256+0 records in 00:07:11.598 256+0 records out 00:07:11.598 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.121932 s, 8.6 MB/s 00:07:11.598 07:47:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:11.598 07:47:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:07:11.598 256+0 records in 00:07:11.598 256+0 records out 00:07:11.598 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.114281 s, 9.2 MB/s 00:07:11.598 07:47:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:11.598 07:47:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:07:11.856 256+0 records in 00:07:11.856 256+0 records out 00:07:11.856 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.128168 s, 8.2 MB/s 00:07:11.856 07:47:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:07:11.856 07:47:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:11.856 07:47:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:11.856 07:47:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:11.856 07:47:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:11.856 07:47:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:11.856 07:47:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:11.856 
07:47:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:11.856 07:47:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:07:12.792 07:47:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:12.792 07:47:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:07:12.792 07:47:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:12.792 07:47:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:07:13.051 07:47:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:13.051 07:47:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:07:13.051 07:47:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:13.051 07:47:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:07:13.051 07:47:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:13.051 07:47:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:07:13.051 07:47:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:13.051 07:47:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:07:13.051 07:47:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:13.051 07:47:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:13.051 07:47:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:13.051 07:47:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:13.051 07:47:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:13.051 07:47:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:13.310 07:47:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:13.310 07:47:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:13.310 07:47:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:13.310 07:47:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:13.310 07:47:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:13.310 07:47:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:13.310 07:47:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:13.310 07:47:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:13.310 07:47:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:13.310 07:47:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:13.568 07:47:15 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:13.568 07:47:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:13.568 07:47:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:13.568 07:47:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:13.569 07:47:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:13.569 07:47:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:13.569 07:47:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:13.569 07:47:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:13.569 07:47:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:13.569 07:47:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:07:14.134 07:47:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:07:14.134 07:47:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:07:14.134 07:47:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:07:14.134 07:47:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:14.134 07:47:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:14.134 07:47:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:07:14.134 07:47:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:14.134 07:47:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:14.134 07:47:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:14.134 07:47:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:07:14.392 07:47:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:07:14.392 07:47:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:07:14.392 07:47:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:07:14.392 07:47:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:14.392 07:47:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:14.392 07:47:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:07:14.392 07:47:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:14.392 07:47:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:14.392 07:47:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:14.392 07:47:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:07:14.651 07:47:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:07:14.651 07:47:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:07:14.651 07:47:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:07:14.651 07:47:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:14.651 07:47:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:14.651 07:47:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w 
nbd12 /proc/partitions 00:07:14.651 07:47:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:14.651 07:47:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:14.651 07:47:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:14.651 07:47:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:07:14.958 07:47:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:07:14.958 07:47:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:07:14.958 07:47:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:07:14.958 07:47:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:14.958 07:47:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:14.958 07:47:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:07:14.958 07:47:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:14.958 07:47:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:14.958 07:47:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:14.958 07:47:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:14.958 07:47:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:15.534 07:47:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:15.534 07:47:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:15.534 07:47:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:15.534 07:47:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:15.534 07:47:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:07:15.534 07:47:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:15.534 07:47:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:07:15.534 07:47:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:07:15.534 07:47:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:07:15.534 07:47:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:07:15.534 07:47:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:15.534 07:47:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:07:15.534 07:47:17 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:07:15.534 07:47:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:15.534 07:47:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:07:15.534 07:47:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:07:15.795 malloc_lvol_verify 00:07:15.795 07:47:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:07:16.361 bd6a0499-d3d9-4f97-b923-fac29433fe9e 00:07:16.361 07:47:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:07:16.619 60009efe-5f53-4fac-ab8d-378ce4854c5c 00:07:16.619 07:47:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:07:16.877 /dev/nbd0 00:07:16.877 07:47:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:07:16.877 07:47:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:07:16.877 07:47:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:07:16.877 07:47:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:07:16.877 07:47:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:07:16.877 mke2fs 1.47.0 (5-Feb-2023) 00:07:16.877 Discarding device blocks: 0/4096 done 00:07:16.877 Creating filesystem with 4096 1k blocks and 1024 inodes 00:07:16.877 00:07:16.877 Allocating group tables: 0/1 done 00:07:16.877 Writing inode tables: 0/1 done 00:07:16.877 Creating journal (1024 blocks): done 00:07:16.877 Writing superblocks and filesystem accounting information: 0/1 done 00:07:16.877 00:07:16.877 07:47:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:07:16.877 07:47:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:16.877 07:47:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:07:16.877 07:47:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:16.877 07:47:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:16.877 07:47:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:16.877 07:47:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:17.442 07:47:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:17.442 07:47:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:17.442 07:47:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:17.442 07:47:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:17.442 07:47:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:17.442 07:47:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:17.442 07:47:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:17.442 07:47:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:17.442 07:47:19 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 61659 00:07:17.442 07:47:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 61659 ']' 00:07:17.442 07:47:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 61659 00:07:17.442 07:47:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@955 -- # uname 00:07:17.442 07:47:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:17.442 07:47:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61659 00:07:17.442 killing process with pid 61659 00:07:17.442 07:47:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:17.442 07:47:19 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:17.442 07:47:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61659' 00:07:17.442 07:47:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@969 -- # kill 61659 00:07:17.442 07:47:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@974 -- # wait 61659 00:07:18.816 07:47:20 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:07:18.816 00:07:18.816 real 0m16.702s 00:07:18.816 user 0m23.789s 00:07:18.816 sys 0m5.002s 00:07:18.816 07:47:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:18.816 07:47:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:07:18.816 ************************************ 00:07:18.816 END TEST bdev_nbd 00:07:18.816 ************************************ 00:07:18.816 07:47:20 blockdev_nvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:07:18.816 skipping fio tests on NVMe due to multi-ns failures. 00:07:18.816 07:47:20 blockdev_nvme -- bdev/blockdev.sh@763 -- # '[' nvme = nvme ']' 00:07:18.816 07:47:20 blockdev_nvme -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:07:18.816 07:47:20 blockdev_nvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:18.816 07:47:20 blockdev_nvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:18.816 07:47:20 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:07:18.816 07:47:20 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:18.816 07:47:20 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:18.816 ************************************ 00:07:18.816 START TEST bdev_verify 00:07:18.816 ************************************ 00:07:18.816 07:47:20 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:19.074 [2024-10-09 07:47:20.909028] Starting SPDK v25.01-pre git sha1 1c2942c86 / DPDK 24.03.0 initialization... 00:07:19.074 [2024-10-09 07:47:20.909234] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62108 ] 00:07:19.332 [2024-10-09 07:47:21.102119] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:19.332 [2024-10-09 07:47:21.294205] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:07:19.332 [2024-10-09 07:47:21.294208] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.265 Running I/O for 5 seconds... 
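The bdevperf invocation above encodes the whole workload on its command line, and the job headers below echo it back: -q 128 is the queue depth, -o 4096 the I/O size in bytes, -w verify a read-back-and-verify workload, -t 5 the runtime in seconds, and -m 0x3 a two-core mask, which is why each bdev appears twice below (Core Mask 0x1 and 0x2). The per-second progress samples that follow can be sanity-checked by hand, since MiB/s is just IOPS times I/O size divided by 2^20; for the first sample:

echo 'scale=2; 17728 * 4096 / 1048576' | bc    # 69.25, matching "17728.00 IOPS, 69.25 MiB/s"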
00:07:22.573 17728.00 IOPS, 69.25 MiB/s [2024-10-09T07:47:25.520Z] 18528.00 IOPS, 72.38 MiB/s [2024-10-09T07:47:26.452Z] 18517.33 IOPS, 72.33 MiB/s [2024-10-09T07:47:27.387Z] 18608.00 IOPS, 72.69 MiB/s [2024-10-09T07:47:27.387Z] 18457.60 IOPS, 72.10 MiB/s 00:07:25.375 Latency(us) 00:07:25.375 [2024-10-09T07:47:27.387Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:25.375 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:25.375 Verification LBA range: start 0x0 length 0xbd0bd 00:07:25.375 Nvme0n1 : 5.07 1516.26 5.92 0.00 0.00 84226.93 17158.52 74830.20 00:07:25.375 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:25.375 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:07:25.375 Nvme0n1 : 5.08 1512.12 5.91 0.00 0.00 84443.40 16324.42 125829.12 00:07:25.375 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:25.375 Verification LBA range: start 0x0 length 0xa0000 00:07:25.375 Nvme1n1 : 5.07 1515.63 5.92 0.00 0.00 84134.62 19541.64 70540.57 00:07:25.375 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:25.375 Verification LBA range: start 0xa0000 length 0xa0000 00:07:25.375 Nvme1n1 : 5.08 1511.52 5.90 0.00 0.00 84202.52 16801.05 120109.61 00:07:25.375 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:25.375 Verification LBA range: start 0x0 length 0x80000 00:07:25.375 Nvme2n1 : 5.07 1515.03 5.92 0.00 0.00 84013.78 18350.08 68157.44 00:07:25.375 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:25.375 Verification LBA range: start 0x80000 length 0x80000 00:07:25.375 Nvme2n1 : 5.08 1510.93 5.90 0.00 0.00 83877.04 16801.05 117249.86 00:07:25.375 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:25.375 Verification LBA range: start 0x0 length 0x80000 00:07:25.375 Nvme2n2 : 5.07 1514.38 5.92 0.00 0.00 83908.33 17635.14 69587.32 00:07:25.375 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:25.375 Verification LBA range: start 0x80000 length 0x80000 00:07:25.375 Nvme2n2 : 5.08 1510.39 5.90 0.00 0.00 83623.56 16801.05 122969.37 00:07:25.375 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:25.375 Verification LBA range: start 0x0 length 0x80000 00:07:25.375 Nvme2n3 : 5.07 1513.68 5.91 0.00 0.00 83788.68 16443.58 71493.82 00:07:25.375 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:25.375 Verification LBA range: start 0x80000 length 0x80000 00:07:25.375 Nvme2n3 : 5.09 1509.84 5.90 0.00 0.00 83413.56 16086.11 128688.87 00:07:25.375 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:25.375 Verification LBA range: start 0x0 length 0x20000 00:07:25.375 Nvme3n1 : 5.08 1513.11 5.91 0.00 0.00 83681.57 9711.24 74830.20 00:07:25.375 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:25.375 Verification LBA range: start 0x20000 length 0x20000 00:07:25.375 Nvme3n1 : 5.11 1527.04 5.96 0.00 0.00 82267.41 7208.96 128688.87 00:07:25.375 [2024-10-09T07:47:27.387Z] =================================================================================================================== 00:07:25.375 [2024-10-09T07:47:27.387Z] Total : 18169.93 70.98 0.00 0.00 83796.33 7208.96 128688.87 00:07:26.777 00:07:26.777 real 0m7.928s 00:07:26.777 user 0m14.350s 00:07:26.777 sys 0m0.286s 00:07:26.777 07:47:28 blockdev_nvme.bdev_verify -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:07:26.777 07:47:28 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:07:26.777 ************************************ 00:07:26.777 END TEST bdev_verify 00:07:26.777 ************************************ 00:07:26.777 07:47:28 blockdev_nvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:07:26.777 07:47:28 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:07:26.777 07:47:28 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:26.777 07:47:28 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:26.777 ************************************ 00:07:26.777 START TEST bdev_verify_big_io 00:07:26.777 ************************************ 00:07:26.777 07:47:28 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:07:27.036 [2024-10-09 07:47:28.892592] Starting SPDK v25.01-pre git sha1 1c2942c86 / DPDK 24.03.0 initialization... 00:07:27.036 [2024-10-09 07:47:28.892821] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62212 ] 00:07:27.294 [2024-10-09 07:47:29.075393] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:27.294 [2024-10-09 07:47:29.269098] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:07:27.294 [2024-10-09 07:47:29.269111] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.228 Running I/O for 5 seconds... 
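The big-I/O pass repeats the same verify workload with -o 65536, so every operation now moves a 64 KiB block; the throughput arithmetic is unchanged, just with the larger I/O size. For the first sample below:

echo 'scale=2; 645 * 65536 / 1048576' | bc    # 40.31, matching "645.00 IOPS, 40.31 MiB/s"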
00:07:32.884 645.00 IOPS, 40.31 MiB/s [2024-10-09T07:47:36.280Z] 2131.00 IOPS, 133.19 MiB/s [2024-10-09T07:47:36.280Z] 2794.00 IOPS, 174.62 MiB/s 00:07:34.268 Latency(us) 00:07:34.268 [2024-10-09T07:47:36.280Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:34.268 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:34.268 Verification LBA range: start 0x0 length 0xbd0b 00:07:34.268 Nvme0n1 : 5.74 139.13 8.70 0.00 0.00 879512.61 20256.58 1021884.97 00:07:34.268 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:34.268 Verification LBA range: start 0xbd0b length 0xbd0b 00:07:34.268 Nvme0n1 : 5.68 112.64 7.04 0.00 0.00 1096754.83 28001.75 1037136.99 00:07:34.268 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:34.268 Verification LBA range: start 0x0 length 0xa000 00:07:34.268 Nvme1n1 : 5.74 138.51 8.66 0.00 0.00 850019.83 79596.45 838860.80 00:07:34.268 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:34.268 Verification LBA range: start 0xa000 length 0xa000 00:07:34.268 Nvme1n1 : 5.69 112.55 7.03 0.00 0.00 1063571.83 139174.63 1082893.03 00:07:34.268 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:34.268 Verification LBA range: start 0x0 length 0x8000 00:07:34.268 Nvme2n1 : 5.90 146.85 9.18 0.00 0.00 790231.79 37891.72 713031.68 00:07:34.268 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:34.268 Verification LBA range: start 0x8000 length 0x8000 00:07:34.268 Nvme2n1 : 5.77 115.84 7.24 0.00 0.00 1005882.35 77213.32 1121023.07 00:07:34.268 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:34.268 Verification LBA range: start 0x0 length 0x8000 00:07:34.268 Nvme2n2 : 5.85 147.63 9.23 0.00 0.00 767953.39 34793.66 751161.72 00:07:34.268 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:34.268 Verification LBA range: start 0x8000 length 0x8000 00:07:34.268 Nvme2n2 : 5.81 121.25 7.58 0.00 0.00 939216.69 35270.28 1159153.11 00:07:34.268 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:34.268 Verification LBA range: start 0x0 length 0x8000 00:07:34.268 Nvme2n3 : 5.90 151.84 9.49 0.00 0.00 725171.93 41943.04 777852.74 00:07:34.268 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:34.268 Verification LBA range: start 0x8000 length 0x8000 00:07:34.268 Nvme2n3 : 5.87 126.34 7.90 0.00 0.00 872062.25 21209.83 1197283.14 00:07:34.268 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:34.268 Verification LBA range: start 0x0 length 0x2000 00:07:34.268 Nvme3n1 : 5.94 166.88 10.43 0.00 0.00 642822.19 6821.70 808356.77 00:07:34.268 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:34.268 Verification LBA range: start 0x2000 length 0x2000 00:07:34.268 Nvme3n1 : 5.92 147.25 9.20 0.00 0.00 731481.13 3247.01 1235413.18 00:07:34.268 [2024-10-09T07:47:36.280Z] =================================================================================================================== 00:07:34.268 [2024-10-09T07:47:36.280Z] Total : 1626.71 101.67 0.00 0.00 845590.31 3247.01 1235413.18 00:07:36.172 00:07:36.172 real 0m9.132s 00:07:36.172 user 0m16.725s 00:07:36.172 sys 0m0.348s 00:07:36.172 07:47:37 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:36.172 07:47:37 
blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:07:36.172 ************************************ 00:07:36.172 END TEST bdev_verify_big_io 00:07:36.172 ************************************ 00:07:36.172 07:47:37 blockdev_nvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:36.172 07:47:37 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:07:36.172 07:47:37 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:36.172 07:47:37 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:36.172 ************************************ 00:07:36.173 START TEST bdev_write_zeroes 00:07:36.173 ************************************ 00:07:36.173 07:47:37 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:36.173 [2024-10-09 07:47:38.049745] Starting SPDK v25.01-pre git sha1 1c2942c86 / DPDK 24.03.0 initialization... 00:07:36.173 [2024-10-09 07:47:38.049946] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62332 ] 00:07:36.432 [2024-10-09 07:47:38.226707] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:36.691 [2024-10-09 07:47:38.458290] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.258 Running I/O for 1 seconds... 
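bdev_write_zeroes switches to -w write_zeroes on a single core (-c 0x1 in the EAL parameters above) for a one-second run, so there is one job per bdev rather than two. In the summary that follows, the per-bdev IOPS figures should sum to the printed total; checking within rounding:

echo '7090.59+7076.38+7062.53+7048.93+7038.42+7027.51' | bc    # 42344.36 vs the reported 42344.35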
00:07:38.203 42624.00 IOPS, 166.50 MiB/s 00:07:38.203 Latency(us) 00:07:38.203 [2024-10-09T07:47:40.215Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:38.203 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:38.203 Nvme0n1 : 1.03 7090.59 27.70 0.00 0.00 17994.44 10366.60 32648.84 00:07:38.203 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:38.203 Nvme1n1 : 1.03 7076.38 27.64 0.00 0.00 18004.02 11677.32 29074.15 00:07:38.203 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:38.203 Nvme2n1 : 1.03 7062.53 27.59 0.00 0.00 17937.35 8996.31 27286.81 00:07:38.203 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:38.203 Nvme2n2 : 1.04 7048.93 27.53 0.00 0.00 17940.21 9175.04 26810.18 00:07:38.203 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:38.204 Nvme2n3 : 1.04 7038.42 27.49 0.00 0.00 17938.42 10128.29 28240.06 00:07:38.204 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:38.204 Nvme3n1 : 1.04 7027.51 27.45 0.00 0.00 17933.99 11081.54 28835.84 00:07:38.204 [2024-10-09T07:47:40.216Z] =================================================================================================================== 00:07:38.204 [2024-10-09T07:47:40.216Z] Total : 42344.35 165.41 0.00 0.00 17958.07 8996.31 32648.84 00:07:39.579 00:07:39.579 real 0m3.510s 00:07:39.579 user 0m3.110s 00:07:39.579 sys 0m0.267s 00:07:39.579 07:47:41 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:39.579 07:47:41 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:07:39.579 ************************************ 00:07:39.579 END TEST bdev_write_zeroes 00:07:39.579 ************************************ 00:07:39.579 07:47:41 blockdev_nvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:39.579 07:47:41 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:07:39.579 07:47:41 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:39.579 07:47:41 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:39.579 ************************************ 00:07:39.579 START TEST bdev_json_nonenclosed 00:07:39.579 ************************************ 00:07:39.579 07:47:41 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:39.837 [2024-10-09 07:47:41.611702] Starting SPDK v25.01-pre git sha1 1c2942c86 / DPDK 24.03.0 initialization... 
00:07:39.837 [2024-10-09 07:47:41.611882] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62385 ] 00:07:39.837 [2024-10-09 07:47:41.782718] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:40.096 [2024-10-09 07:47:42.010305] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.096 [2024-10-09 07:47:42.010452] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:07:40.096 [2024-10-09 07:47:42.010487] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:40.096 [2024-10-09 07:47:42.010504] app.c:1062:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:40.664 00:07:40.664 real 0m0.941s 00:07:40.664 user 0m0.679s 00:07:40.664 sys 0m0.155s 00:07:40.664 07:47:42 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:40.664 07:47:42 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:07:40.664 ************************************ 00:07:40.664 END TEST bdev_json_nonenclosed 00:07:40.664 ************************************ 00:07:40.664 07:47:42 blockdev_nvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:40.664 07:47:42 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:07:40.664 07:47:42 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:40.664 07:47:42 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:40.664 ************************************ 00:07:40.664 START TEST bdev_json_nonarray 00:07:40.664 ************************************ 00:07:40.664 07:47:42 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:40.664 [2024-10-09 07:47:42.583079] Starting SPDK v25.01-pre git sha1 1c2942c86 / DPDK 24.03.0 initialization... 00:07:40.664 [2024-10-09 07:47:42.583241] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62416 ] 00:07:40.923 [2024-10-09 07:47:42.747730] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.182 [2024-10-09 07:47:42.944655] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.182 [2024-10-09 07:47:42.944775] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
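Both negative tests land exactly where they should: json_config_prepare_ctx rejects the config before any bdev is created. The actual contents of nonenclosed.json and nonarray.json are not echoed in this log, so the minimal shapes below are assumptions, but each would trip the corresponding error seen above:

    # Hypothetical reproductions (file bodies are assumptions, not copied
    # from the repo).

    # 1) Top-level JSON value is an array, not an object
    #    -> "Invalid JSON configuration: not enclosed in {}."
    echo '[ { "subsystems": [] } ]' > /tmp/nonenclosed.json

    # 2) "subsystems" present but not an array
    #    -> "Invalid JSON configuration: 'subsystems' should be an array."
    echo '{ "subsystems": { "subsystem": "bdev", "config": [] } }' > /tmp/nonarray.json

In either case the app aborts during config load, which is why these runs finish with spdk_app_stop'd on non-zero instead of I/O statistics.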
00:07:41.182 [2024-10-09 07:47:42.944805] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:41.182 [2024-10-09 07:47:42.944819] app.c:1062:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:41.476 00:07:41.476 real 0m0.866s 00:07:41.476 user 0m0.632s 00:07:41.476 sys 0m0.127s 00:07:41.476 07:47:43 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:41.476 07:47:43 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:07:41.476 ************************************ 00:07:41.476 END TEST bdev_json_nonarray 00:07:41.476 ************************************ 00:07:41.476 07:47:43 blockdev_nvme -- bdev/blockdev.sh@786 -- # [[ nvme == bdev ]] 00:07:41.476 07:47:43 blockdev_nvme -- bdev/blockdev.sh@793 -- # [[ nvme == gpt ]] 00:07:41.476 07:47:43 blockdev_nvme -- bdev/blockdev.sh@797 -- # [[ nvme == crypto_sw ]] 00:07:41.476 07:47:43 blockdev_nvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:07:41.476 07:47:43 blockdev_nvme -- bdev/blockdev.sh@810 -- # cleanup 00:07:41.476 07:47:43 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:07:41.476 07:47:43 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:41.476 07:47:43 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:07:41.476 07:47:43 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:07:41.476 07:47:43 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:07:41.476 07:47:43 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:07:41.476 00:07:41.476 real 0m49.158s 00:07:41.476 user 1m13.747s 00:07:41.476 sys 0m7.612s 00:07:41.476 07:47:43 blockdev_nvme -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:41.476 07:47:43 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:41.476 ************************************ 00:07:41.476 END TEST blockdev_nvme 00:07:41.476 ************************************ 00:07:41.476 07:47:43 -- spdk/autotest.sh@209 -- # uname -s 00:07:41.476 07:47:43 -- spdk/autotest.sh@209 -- # [[ Linux == Linux ]] 00:07:41.476 07:47:43 -- spdk/autotest.sh@210 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:07:41.476 07:47:43 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:41.476 07:47:43 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:41.476 07:47:43 -- common/autotest_common.sh@10 -- # set +x 00:07:41.476 ************************************ 00:07:41.476 START TEST blockdev_nvme_gpt 00:07:41.476 ************************************ 00:07:41.476 07:47:43 blockdev_nvme_gpt -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:07:41.734 * Looking for test storage... 
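The gpt variant starts the way every blockdev.sh flavor does: locate writable test storage (the Found line just below) and probe the installed lcov, gating coverage flags on whether it predates version 2. The cmp_versions xtrace that follows is a component-wise dotted-version compare; condensed into a standalone helper (a sketch of the traced logic, not the verbatim scripts/common.sh source):

    # lt A B: succeed when version A sorts before version B; components
    # are split on . - : and missing components count as 0.
    lt() {
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        done
        return 1    # equal is not less-than
    }
    lt "$(lcov --version | awk '{print $NF}')" 2 && echo "pre-2.x lcov"

Here lcov reports 1.15, so the lt 1.15 2 branch taken in the trace enables the legacy --rc lcov_branch_coverage/lcov_function_coverage options.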
00:07:41.734 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:07:41.734 07:47:43 blockdev_nvme_gpt -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:41.734 07:47:43 blockdev_nvme_gpt -- common/autotest_common.sh@1681 -- # lcov --version 00:07:41.734 07:47:43 blockdev_nvme_gpt -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:41.734 07:47:43 blockdev_nvme_gpt -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:41.734 07:47:43 blockdev_nvme_gpt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:41.734 07:47:43 blockdev_nvme_gpt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:41.734 07:47:43 blockdev_nvme_gpt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:41.734 07:47:43 blockdev_nvme_gpt -- scripts/common.sh@336 -- # IFS=.-: 00:07:41.734 07:47:43 blockdev_nvme_gpt -- scripts/common.sh@336 -- # read -ra ver1 00:07:41.734 07:47:43 blockdev_nvme_gpt -- scripts/common.sh@337 -- # IFS=.-: 00:07:41.734 07:47:43 blockdev_nvme_gpt -- scripts/common.sh@337 -- # read -ra ver2 00:07:41.734 07:47:43 blockdev_nvme_gpt -- scripts/common.sh@338 -- # local 'op=<' 00:07:41.734 07:47:43 blockdev_nvme_gpt -- scripts/common.sh@340 -- # ver1_l=2 00:07:41.734 07:47:43 blockdev_nvme_gpt -- scripts/common.sh@341 -- # ver2_l=1 00:07:41.734 07:47:43 blockdev_nvme_gpt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:41.734 07:47:43 blockdev_nvme_gpt -- scripts/common.sh@344 -- # case "$op" in 00:07:41.734 07:47:43 blockdev_nvme_gpt -- scripts/common.sh@345 -- # : 1 00:07:41.734 07:47:43 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:41.734 07:47:43 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:41.734 07:47:43 blockdev_nvme_gpt -- scripts/common.sh@365 -- # decimal 1 00:07:41.734 07:47:43 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=1 00:07:41.734 07:47:43 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:41.734 07:47:43 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 1 00:07:41.734 07:47:43 blockdev_nvme_gpt -- scripts/common.sh@365 -- # ver1[v]=1 00:07:41.734 07:47:43 blockdev_nvme_gpt -- scripts/common.sh@366 -- # decimal 2 00:07:41.734 07:47:43 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=2 00:07:41.734 07:47:43 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:41.734 07:47:43 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 2 00:07:41.734 07:47:43 blockdev_nvme_gpt -- scripts/common.sh@366 -- # ver2[v]=2 00:07:41.734 07:47:43 blockdev_nvme_gpt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:41.734 07:47:43 blockdev_nvme_gpt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:41.734 07:47:43 blockdev_nvme_gpt -- scripts/common.sh@368 -- # return 0 00:07:41.734 07:47:43 blockdev_nvme_gpt -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:41.734 07:47:43 blockdev_nvme_gpt -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:41.734 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:41.734 --rc genhtml_branch_coverage=1 00:07:41.734 --rc genhtml_function_coverage=1 00:07:41.734 --rc genhtml_legend=1 00:07:41.734 --rc geninfo_all_blocks=1 00:07:41.734 --rc geninfo_unexecuted_blocks=1 00:07:41.734 00:07:41.734 ' 00:07:41.734 07:47:43 blockdev_nvme_gpt -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:41.734 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:41.734 --rc 
genhtml_branch_coverage=1 00:07:41.734 --rc genhtml_function_coverage=1 00:07:41.734 --rc genhtml_legend=1 00:07:41.734 --rc geninfo_all_blocks=1 00:07:41.734 --rc geninfo_unexecuted_blocks=1 00:07:41.734 00:07:41.734 ' 00:07:41.734 07:47:43 blockdev_nvme_gpt -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:41.734 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:41.734 --rc genhtml_branch_coverage=1 00:07:41.734 --rc genhtml_function_coverage=1 00:07:41.734 --rc genhtml_legend=1 00:07:41.734 --rc geninfo_all_blocks=1 00:07:41.734 --rc geninfo_unexecuted_blocks=1 00:07:41.734 00:07:41.734 ' 00:07:41.734 07:47:43 blockdev_nvme_gpt -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:41.734 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:41.734 --rc genhtml_branch_coverage=1 00:07:41.734 --rc genhtml_function_coverage=1 00:07:41.734 --rc genhtml_legend=1 00:07:41.734 --rc geninfo_all_blocks=1 00:07:41.734 --rc geninfo_unexecuted_blocks=1 00:07:41.734 00:07:41.734 ' 00:07:41.734 07:47:43 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:07:41.734 07:47:43 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 00:07:41.734 07:47:43 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:07:41.734 07:47:43 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:41.734 07:47:43 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:07:41.734 07:47:43 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:07:41.734 07:47:43 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:07:41.734 07:47:43 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:07:41.734 07:47:43 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 00:07:41.734 07:47:43 blockdev_nvme_gpt -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:07:41.734 07:47:43 blockdev_nvme_gpt -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:07:41.734 07:47:43 blockdev_nvme_gpt -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:07:41.734 07:47:43 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # uname -s 00:07:41.734 07:47:43 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:07:41.734 07:47:43 blockdev_nvme_gpt -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:07:41.734 07:47:43 blockdev_nvme_gpt -- bdev/blockdev.sh@681 -- # test_type=gpt 00:07:41.734 07:47:43 blockdev_nvme_gpt -- bdev/blockdev.sh@682 -- # crypto_device= 00:07:41.734 07:47:43 blockdev_nvme_gpt -- bdev/blockdev.sh@683 -- # dek= 00:07:41.734 07:47:43 blockdev_nvme_gpt -- bdev/blockdev.sh@684 -- # env_ctx= 00:07:41.734 07:47:43 blockdev_nvme_gpt -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:07:41.734 07:47:43 blockdev_nvme_gpt -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:07:41.734 07:47:43 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == bdev ]] 00:07:41.734 07:47:43 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == crypto_* ]] 00:07:41.734 07:47:43 blockdev_nvme_gpt -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:07:41.734 07:47:43 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=62500 00:07:41.734 07:47:43 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:07:41.734 07:47:43 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:07:41.734 07:47:43 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 62500 00:07:41.734 07:47:43 blockdev_nvme_gpt -- common/autotest_common.sh@831 -- # '[' -z 62500 ']' 00:07:41.734 07:47:43 blockdev_nvme_gpt -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:41.734 07:47:43 blockdev_nvme_gpt -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:41.734 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:41.734 07:47:43 blockdev_nvme_gpt -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:41.734 07:47:43 blockdev_nvme_gpt -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:41.734 07:47:43 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:41.993 [2024-10-09 07:47:43.802317] Starting SPDK v25.01-pre git sha1 1c2942c86 / DPDK 24.03.0 initialization... 00:07:41.993 [2024-10-09 07:47:43.802488] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62500 ] 00:07:41.993 [2024-10-09 07:47:43.962924] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:42.251 [2024-10-09 07:47:44.155346] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.187 07:47:44 blockdev_nvme_gpt -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:43.187 07:47:44 blockdev_nvme_gpt -- common/autotest_common.sh@864 -- # return 0 00:07:43.187 07:47:44 blockdev_nvme_gpt -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:07:43.187 07:47:44 blockdev_nvme_gpt -- bdev/blockdev.sh@701 -- # setup_gpt_conf 00:07:43.187 07:47:44 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:07:43.445 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:43.445 Waiting for block devices as requested 00:07:43.445 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:07:43.703 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:07:43.703 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:07:43.703 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:07:48.965 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:07:48.965 07:47:50 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:07:48.965 07:47:50 blockdev_nvme_gpt -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:07:48.965 07:47:50 blockdev_nvme_gpt -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:07:48.965 07:47:50 blockdev_nvme_gpt -- common/autotest_common.sh@1656 -- # local nvme bdf 00:07:48.965 07:47:50 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:07:48.965 07:47:50 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:07:48.965 07:47:50 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:07:48.965 07:47:50 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:07:48.965 07:47:50 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:07:48.965 07:47:50 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 
00:07:48.965 07:47:50 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:07:48.965 07:47:50 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:07:48.965 07:47:50 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:07:48.965 07:47:50 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:07:48.965 07:47:50 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:07:48.965 07:47:50 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n1 00:07:48.965 07:47:50 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme2n1 00:07:48.965 07:47:50 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:07:48.965 07:47:50 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:07:48.965 07:47:50 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:07:48.965 07:47:50 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n2 00:07:48.965 07:47:50 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme2n2 00:07:48.965 07:47:50 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:07:48.965 07:47:50 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:07:48.965 07:47:50 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:07:48.965 07:47:50 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n3 00:07:48.965 07:47:50 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme2n3 00:07:48.965 07:47:50 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:07:48.965 07:47:50 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:07:48.965 07:47:50 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:07:48.965 07:47:50 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3c3n1 00:07:48.965 07:47:50 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme3c3n1 00:07:48.965 07:47:50 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:07:48.965 07:47:50 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:07:48.965 07:47:50 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:07:48.965 07:47:50 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3n1 00:07:48.965 07:47:50 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme3n1 00:07:48.965 07:47:50 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:07:48.965 07:47:50 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:07:48.965 07:47:50 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # nvme_devs=('/sys/block/nvme0n1' '/sys/block/nvme1n1' '/sys/block/nvme2n1' '/sys/block/nvme2n2' '/sys/block/nvme2n3' '/sys/block/nvme3n1') 00:07:48.965 07:47:50 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # local nvme_devs nvme_dev 00:07:48.965 07:47:50 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # gpt_nvme= 00:07:48.965 07:47:50 blockdev_nvme_gpt -- bdev/blockdev.sh@109 -- # for nvme_dev in "${nvme_devs[@]}" 
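That completes the zoned scan: a namespace is excluded only when its queue/zoned sysfs attribute reads something other than none, and every namespace here reads none. As standalone shell the traced logic is roughly (a sketch, not the verbatim autotest_common.sh helpers):

    # Collect zoned namespaces into an associative map (the real helper
    # stores a device identifier as the value; 1 suffices for the sketch).
    declare -A zoned_devs=()
    for nvme in /sys/block/nvme*; do
        dev=${nvme##*/}
        if [[ -e /sys/block/$dev/queue/zoned &&
              $(< "/sys/block/$dev/queue/zoned") != none ]]; then
            zoned_devs[$dev]=1
        fi
    done

The nvme_devs loop entered at the end of the trace above then probes each candidate with parted until it finds one without a recognised disk label (here /dev/nvme0n1), partitions it, and stamps SPDK's own partition-type GUIDs, which it pulls out of gpt.h as traced below; condensed:

    # Extract the SPDK partition-type GUID from the header: keep the
    # parenthesised part of the macro, then drop the 0x prefixes (the
    # substitution syntax is an assumption; the trace only shows the
    # before/after values).
    GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h
    IFS='()' read -r _ spdk_guid _ < <(grep -w SPDK_GPT_PART_TYPE_GUID "$GPT_H")
    spdk_guid=${spdk_guid//0x/}    # -> 6527994e-2c5a-4eec-9613-8f5944074e8b

    # Label and type the partitions (commands verbatim from the trace):
    parted -s /dev/nvme0n1 mklabel gpt \
        mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100%
    sgdisk -t "1:$spdk_guid" -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1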
00:07:48.965 07:47:50 blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # [[ -z '' ]] 00:07:48.965 07:47:50 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # dev=/dev/nvme0n1 00:07:48.965 07:47:50 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # parted /dev/nvme0n1 -ms print 00:07:48.965 07:47:50 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:07:48.965 BYT; 00:07:48.965 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:07:48.965 07:47:50 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:07:48.965 BYT; 00:07:48.965 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:07:48.965 07:47:50 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # gpt_nvme=/dev/nvme0n1 00:07:48.965 07:47:50 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # break 00:07:48.965 07:47:50 blockdev_nvme_gpt -- bdev/blockdev.sh@118 -- # [[ -n /dev/nvme0n1 ]] 00:07:48.965 07:47:50 blockdev_nvme_gpt -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:07:48.965 07:47:50 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:07:48.965 07:47:50 blockdev_nvme_gpt -- bdev/blockdev.sh@127 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:07:48.965 07:47:50 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # get_spdk_gpt_old 00:07:48.965 07:47:50 blockdev_nvme_gpt -- scripts/common.sh@411 -- # local spdk_guid 00:07:48.965 07:47:50 blockdev_nvme_gpt -- scripts/common.sh@413 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:07:48.965 07:47:50 blockdev_nvme_gpt -- scripts/common.sh@415 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:48.965 07:47:50 blockdev_nvme_gpt -- scripts/common.sh@416 -- # IFS='()' 00:07:48.965 07:47:50 blockdev_nvme_gpt -- scripts/common.sh@416 -- # read -r _ spdk_guid _ 00:07:48.965 07:47:50 blockdev_nvme_gpt -- scripts/common.sh@416 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:48.966 07:47:50 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:07:48.966 07:47:50 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:07:48.966 07:47:50 blockdev_nvme_gpt -- scripts/common.sh@419 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:07:48.966 07:47:50 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:07:48.966 07:47:50 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt 00:07:48.966 07:47:50 blockdev_nvme_gpt -- scripts/common.sh@423 -- # local spdk_guid 00:07:48.966 07:47:50 blockdev_nvme_gpt -- scripts/common.sh@425 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:07:48.966 07:47:50 blockdev_nvme_gpt -- scripts/common.sh@427 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:48.966 07:47:50 blockdev_nvme_gpt -- scripts/common.sh@428 -- # IFS='()' 00:07:48.966 07:47:50 blockdev_nvme_gpt -- scripts/common.sh@428 -- # read -r _ spdk_guid _ 00:07:48.966 07:47:50 blockdev_nvme_gpt -- scripts/common.sh@428 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:48.966 07:47:50 blockdev_nvme_gpt -- scripts/common.sh@429 -- # 
spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:07:48.966 07:47:50 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:07:48.966 07:47:50 blockdev_nvme_gpt -- scripts/common.sh@431 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:07:48.966 07:47:50 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:07:48.966 07:47:50 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:07:50.338 The operation has completed successfully. 00:07:50.338 07:47:51 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:07:51.273 The operation has completed successfully. 00:07:51.273 07:47:52 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:51.531 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:52.096 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:07:52.096 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:07:52.096 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:07:52.354 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:07:52.354 07:47:54 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # rpc_cmd bdev_get_bdevs 00:07:52.354 07:47:54 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:52.354 07:47:54 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:52.354 [] 00:07:52.354 07:47:54 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:52.354 07:47:54 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # setup_nvme_conf 00:07:52.354 07:47:54 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 00:07:52.354 07:47:54 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json 00:07:52.354 07:47:54 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:52.354 07:47:54 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:07:52.354 07:47:54 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:52.354 07:47:54 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:52.611 07:47:54 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:52.611 07:47:54 blockdev_nvme_gpt -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:07:52.611 07:47:54 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:52.611 07:47:54 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:52.611 07:47:54 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:52.611 07:47:54 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # cat 00:07:52.611 07:47:54 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:07:52.611 07:47:54 
blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:52.611 07:47:54 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:52.611 07:47:54 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:52.611 07:47:54 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:07:52.611 07:47:54 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:52.611 07:47:54 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:52.611 07:47:54 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:52.611 07:47:54 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:07:52.611 07:47:54 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:52.611 07:47:54 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:52.869 07:47:54 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:52.869 07:47:54 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:07:52.869 07:47:54 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:07:52.869 07:47:54 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:52.869 07:47:54 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:07:52.869 07:47:54 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:52.869 07:47:54 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:52.869 07:47:54 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:07:52.869 07:47:54 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # jq -r .name 00:07:52.870 07:47:54 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "beedfd8d-8755-4db6-b4d5-e7223140e7e1"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "beedfd8d-8755-4db6-b4d5-e7223140e7e1",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' 
"num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme1n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "ba6c1ce0-8b38-42ca-8eee-1ef02150ddb2"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "ba6c1ce0-8b38-42ca-8eee-1ef02150ddb2",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' 
"nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "1fbbf458-8878-4a52-a627-b568b0eb057c"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "1fbbf458-8878-4a52-a627-b568b0eb057c",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "6610b5b8-a040-415d-9f28-3cf6608c9a3d"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "6610b5b8-a040-415d-9f28-3cf6608c9a3d",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "66f557fa-4ca7-48c1-83db-f51829f011aa"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "66f557fa-4ca7-48c1-83db-f51829f011aa",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' 
"w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:07:52.870 07:47:54 blockdev_nvme_gpt -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:07:52.870 07:47:54 blockdev_nvme_gpt -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:07:52.870 07:47:54 blockdev_nvme_gpt -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:07:52.870 07:47:54 blockdev_nvme_gpt -- bdev/blockdev.sh@753 -- # killprocess 62500 00:07:52.870 07:47:54 blockdev_nvme_gpt -- common/autotest_common.sh@950 -- # '[' -z 62500 ']' 00:07:52.870 07:47:54 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # kill -0 62500 00:07:52.870 07:47:54 blockdev_nvme_gpt -- common/autotest_common.sh@955 -- # uname 00:07:52.870 07:47:54 blockdev_nvme_gpt -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:52.870 07:47:54 blockdev_nvme_gpt -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 62500 00:07:52.870 killing process with pid 62500 00:07:52.870 07:47:54 blockdev_nvme_gpt -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:52.870 07:47:54 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:52.870 07:47:54 blockdev_nvme_gpt -- common/autotest_common.sh@968 -- # echo 'killing process with pid 62500' 00:07:52.870 07:47:54 blockdev_nvme_gpt -- common/autotest_common.sh@969 -- # kill 62500 00:07:52.870 07:47:54 blockdev_nvme_gpt -- common/autotest_common.sh@974 -- # wait 62500 00:07:55.420 07:47:57 blockdev_nvme_gpt -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:55.420 07:47:57 blockdev_nvme_gpt -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:07:55.420 07:47:57 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:07:55.420 07:47:57 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:55.420 07:47:57 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:55.420 ************************************ 00:07:55.420 START TEST bdev_hello_world 00:07:55.420 ************************************ 00:07:55.420 07:47:57 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:07:55.420 
[2024-10-09 07:47:57.151046] Starting SPDK v25.01-pre git sha1 1c2942c86 / DPDK 24.03.0 initialization... 00:07:55.420 [2024-10-09 07:47:57.151603] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63137 ] 00:07:55.420 [2024-10-09 07:47:57.314461] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:55.679 [2024-10-09 07:47:57.549785] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:56.245 [2024-10-09 07:47:58.170450] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:07:56.245 [2024-10-09 07:47:58.170525] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:07:56.245 [2024-10-09 07:47:58.170563] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:07:56.245 [2024-10-09 07:47:58.173780] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:07:56.245 [2024-10-09 07:47:58.174385] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:07:56.245 [2024-10-09 07:47:58.174433] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:07:56.245 [2024-10-09 07:47:58.174741] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:07:56.245 00:07:56.245 [2024-10-09 07:47:58.174797] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:07:57.622 00:07:57.622 real 0m2.299s 00:07:57.622 user 0m1.957s 00:07:57.622 sys 0m0.226s 00:07:57.622 07:47:59 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:57.622 ************************************ 00:07:57.622 END TEST bdev_hello_world 00:07:57.622 ************************************ 00:07:57.622 07:47:59 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:07:57.622 07:47:59 blockdev_nvme_gpt -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:07:57.622 07:47:59 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:57.622 07:47:59 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:57.622 07:47:59 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:57.622 ************************************ 00:07:57.622 START TEST bdev_bounds 00:07:57.622 ************************************ 00:07:57.622 07:47:59 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:07:57.622 07:47:59 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=63185 00:07:57.622 07:47:59 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:57.622 07:47:59 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:07:57.622 Process bdevio pid: 63185 00:07:57.622 07:47:59 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 63185' 00:07:57.622 07:47:59 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 63185 00:07:57.622 07:47:59 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@831 -- # '[' -z 63185 ']' 00:07:57.622 07:47:59 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:57.622 07:47:59 
blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:57.622 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:57.622 07:47:59 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:57.622 07:47:59 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:57.622 07:47:59 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:07:57.622 [2024-10-09 07:47:59.498763] Starting SPDK v25.01-pre git sha1 1c2942c86 / DPDK 24.03.0 initialization... 00:07:57.622 [2024-10-09 07:47:59.498940] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63185 ] 00:07:57.880 [2024-10-09 07:47:59.661962] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:57.880 [2024-10-09 07:47:59.855972] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:07:57.880 [2024-10-09 07:47:59.856098] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:57.880 [2024-10-09 07:47:59.856104] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:07:58.815 07:48:00 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:58.815 07:48:00 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@864 -- # return 0 00:07:58.815 07:48:00 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:07:58.815 I/O targets: 00:07:58.815 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:07:58.815 Nvme1n1p1: 655104 blocks of 4096 bytes (2559 MiB) 00:07:58.815 Nvme1n1p2: 655103 blocks of 4096 bytes (2559 MiB) 00:07:58.815 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:58.815 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:58.815 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:58.815 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:07:58.815 00:07:58.815 00:07:58.815 CUnit - A unit testing framework for C - Version 2.1-3 00:07:58.815 http://cunit.sourceforge.net/ 00:07:58.815 00:07:58.815 00:07:58.815 Suite: bdevio tests on: Nvme3n1 00:07:58.815 Test: blockdev write read block ...passed 00:07:58.815 Test: blockdev write zeroes read block ...passed 00:07:58.815 Test: blockdev write zeroes read no split ...passed 00:07:58.815 Test: blockdev write zeroes read split ...passed 00:07:58.815 Test: blockdev write zeroes read split partial ...passed 00:07:58.815 Test: blockdev reset ...[2024-10-09 07:48:00.722148] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0] resetting controller 00:07:58.815 passed 00:07:58.815 Test: blockdev write read 8 blocks ...[2024-10-09 07:48:00.726127] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
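Every suite in this CUnit run has the shape shown here for Nvme3n1: reset the controller, walk the read/write bounds cases, then issue an NVMe COMPARE that is intended to miscompare, so the COMPARE FAILURE (02/85) completions printed in each suite are expected output from a passing test, not a failure. The harness driving it was launched a few lines up and, stripped of the repo helpers, amounts to:

    # bdevio idles (-w) until the perform_tests RPC arrives; -s 0 is the
    # PRE_RESERVED_MEM value blockdev.sh passes through. tests.py then
    # fires the whole suite against every bdev in bdev.json.
    SPDK=/home/vagrant/spdk_repo/spdk
    "$SPDK/test/bdev/bdevio/bdevio" -w -s 0 \
        --json "$SPDK/test/bdev/bdev.json" &
    bdevio_pid=$!    # 63185 in this run; waitforlisten polls its RPC socket
    "$SPDK/test/bdev/bdevio/tests.py" perform_tests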
00:07:58.815 passed 00:07:58.815 Test: blockdev write read size > 128k ...passed 00:07:58.815 Test: blockdev write read invalid size ...passed 00:07:58.815 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:58.815 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:58.815 Test: blockdev write read max offset ...passed 00:07:58.815 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:58.815 Test: blockdev writev readv 8 blocks ...passed 00:07:58.815 Test: blockdev writev readv 30 x 1block ...passed 00:07:58.815 Test: blockdev writev readv block ...passed 00:07:58.815 Test: blockdev writev readv size > 128k ...passed 00:07:58.815 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:58.815 Test: blockdev comparev and writev ...[2024-10-09 07:48:00.733668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c1e06000 len:0x1000 00:07:58.815 [2024-10-09 07:48:00.733734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:58.815 passed 00:07:58.815 Test: blockdev nvme passthru rw ...passed 00:07:58.815 Test: blockdev nvme passthru vendor specific ...passed 00:07:58.815 Test: blockdev nvme admin passthru ...[2024-10-09 07:48:00.734507] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:58.815 [2024-10-09 07:48:00.734558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:58.815 passed 00:07:58.815 Test: blockdev copy ...passed 00:07:58.815 Suite: bdevio tests on: Nvme2n3 00:07:58.815 Test: blockdev write read block ...passed 00:07:58.815 Test: blockdev write zeroes read block ...passed 00:07:58.815 Test: blockdev write zeroes read no split ...passed 00:07:58.815 Test: blockdev write zeroes read split ...passed 00:07:58.815 Test: blockdev write zeroes read split partial ...passed 00:07:58.815 Test: blockdev reset ...[2024-10-09 07:48:00.800933] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:07:58.815 [2024-10-09 07:48:00.805571] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:58.815 passed 00:07:58.815 Test: blockdev write read 8 blocks ...passed 00:07:58.815 Test: blockdev write read size > 128k ...passed 00:07:58.815 Test: blockdev write read invalid size ...passed 00:07:58.815 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:58.815 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:58.815 Test: blockdev write read max offset ...passed 00:07:58.815 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:58.815 Test: blockdev writev readv 8 blocks ...passed 00:07:58.815 Test: blockdev writev readv 30 x 1block ...passed 00:07:58.815 Test: blockdev writev readv block ...passed 00:07:58.815 Test: blockdev writev readv size > 128k ...passed 00:07:58.815 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:58.815 Test: blockdev comparev and writev ...[2024-10-09 07:48:00.813407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d203c000 len:0x1000 00:07:58.815 [2024-10-09 07:48:00.813469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:58.815 passed 00:07:58.815 Test: blockdev nvme passthru rw ...passed 00:07:58.815 Test: blockdev nvme passthru vendor specific ...[2024-10-09 07:48:00.814246] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:58.815 [2024-10-09 07:48:00.814288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:58.815 passed 00:07:58.815 Test: blockdev nvme admin passthru ...passed 00:07:58.815 Test: blockdev copy ...passed 00:07:58.815 Suite: bdevio tests on: Nvme2n2 00:07:58.815 Test: blockdev write read block ...passed 00:07:58.815 Test: blockdev write zeroes read block ...passed 00:07:59.074 Test: blockdev write zeroes read no split ...passed 00:07:59.074 Test: blockdev write zeroes read split ...passed 00:07:59.074 Test: blockdev write zeroes read split partial ...passed 00:07:59.074 Test: blockdev reset ...[2024-10-09 07:48:00.880255] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:07:59.074 passed 00:07:59.075 Test: blockdev write read 8 blocks ...[2024-10-09 07:48:00.884847] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:59.075 passed 00:07:59.075 Test: blockdev write read size > 128k ...passed 00:07:59.075 Test: blockdev write read invalid size ...passed 00:07:59.075 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:59.075 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:59.075 Test: blockdev write read max offset ...passed 00:07:59.075 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:59.075 Test: blockdev writev readv 8 blocks ...passed 00:07:59.075 Test: blockdev writev readv 30 x 1block ...passed 00:07:59.075 Test: blockdev writev readv block ...passed 00:07:59.075 Test: blockdev writev readv size > 128k ...passed 00:07:59.075 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:59.075 Test: blockdev comparev and writev ...[2024-10-09 07:48:00.892357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d2036000 len:0x1000 00:07:59.075 [2024-10-09 07:48:00.892419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:59.075 passed 00:07:59.075 Test: blockdev nvme passthru rw ...passed 00:07:59.075 Test: blockdev nvme passthru vendor specific ...passed 00:07:59.075 Test: blockdev nvme admin passthru ...[2024-10-09 07:48:00.893282] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:59.075 [2024-10-09 07:48:00.893345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:59.075 passed 00:07:59.075 Test: blockdev copy ...passed 00:07:59.075 Suite: bdevio tests on: Nvme2n1 00:07:59.075 Test: blockdev write read block ...passed 00:07:59.075 Test: blockdev write zeroes read block ...passed 00:07:59.075 Test: blockdev write zeroes read no split ...passed 00:07:59.075 Test: blockdev write zeroes read split ...passed 00:07:59.075 Test: blockdev write zeroes read split partial ...passed 00:07:59.075 Test: blockdev reset ...[2024-10-09 07:48:00.961702] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:07:59.075 [2024-10-09 07:48:00.966221] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:59.075 passed 00:07:59.075 Test: blockdev write read 8 blocks ...passed 00:07:59.075 Test: blockdev write read size > 128k ...passed 00:07:59.075 Test: blockdev write read invalid size ...passed 00:07:59.075 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:59.075 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:59.075 Test: blockdev write read max offset ...passed 00:07:59.075 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:59.075 Test: blockdev writev readv 8 blocks ...passed 00:07:59.075 Test: blockdev writev readv 30 x 1block ...passed 00:07:59.075 Test: blockdev writev readv block ...passed 00:07:59.075 Test: blockdev writev readv size > 128k ...passed 00:07:59.075 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:59.075 Test: blockdev comparev and writev ...[2024-10-09 07:48:00.974206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d2032000 len:0x1000 00:07:59.075 [2024-10-09 07:48:00.974271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:59.075 passed 00:07:59.075 Test: blockdev nvme passthru rw ...passed 00:07:59.075 Test: blockdev nvme passthru vendor specific ...[2024-10-09 07:48:00.975025] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:59.075 [2024-10-09 07:48:00.975068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:59.075 passed 00:07:59.075 Test: blockdev nvme admin passthru ...passed 00:07:59.075 Test: blockdev copy ...passed 00:07:59.075 Suite: bdevio tests on: Nvme1n1p2 00:07:59.075 Test: blockdev write read block ...passed 00:07:59.075 Test: blockdev write zeroes read block ...passed 00:07:59.075 Test: blockdev write zeroes read no split ...passed 00:07:59.075 Test: blockdev write zeroes read split ...passed 00:07:59.075 Test: blockdev write zeroes read split partial ...passed 00:07:59.075 Test: blockdev reset ...[2024-10-09 07:48:01.043391] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0] resetting controller 00:07:59.075 passed 00:07:59.075 Test: blockdev write read 8 blocks ...[2024-10-09 07:48:01.047389] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:59.075 passed 00:07:59.075 Test: blockdev write read size > 128k ...passed 00:07:59.075 Test: blockdev write read invalid size ...passed 00:07:59.075 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:59.075 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:59.075 Test: blockdev write read max offset ...passed 00:07:59.075 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:59.075 Test: blockdev writev readv 8 blocks ...passed 00:07:59.075 Test: blockdev writev readv 30 x 1block ...passed 00:07:59.075 Test: blockdev writev readv block ...passed 00:07:59.075 Test: blockdev writev readv size > 128k ...passed 00:07:59.075 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:59.075 Test: blockdev comparev and writev ...[2024-10-09 07:48:01.055233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x2d202e000 len:0x1000 00:07:59.075 [2024-10-09 07:48:01.055297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:59.075 passed 00:07:59.075 Test: blockdev nvme passthru rw ...passed 00:07:59.075 Test: blockdev nvme passthru vendor specific ...passed 00:07:59.075 Test: blockdev nvme admin passthru ...passed 00:07:59.075 Test: blockdev copy ...passed 00:07:59.075 Suite: bdevio tests on: Nvme1n1p1 00:07:59.075 Test: blockdev write read block ...passed 00:07:59.075 Test: blockdev write zeroes read block ...passed 00:07:59.075 Test: blockdev write zeroes read no split ...passed 00:07:59.386 Test: blockdev write zeroes read split ...passed 00:07:59.386 Test: blockdev write zeroes read split partial ...passed 00:07:59.386 Test: blockdev reset ...[2024-10-09 07:48:01.113297] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0] resetting controller 00:07:59.386 [2024-10-09 07:48:01.117244] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
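Likewise, the "INVALID OPCODE (00/01)" completions in the earlier passthru cases come from the harness submitting an opcode the controller does not implement and checking that it is rejected. Assuming nvme-cli is available, a comparable probe (the device path and opcode value are illustrative, not taken from this run):

  # Send an admin opcode the controller should not implement; expect the
  # command to be completed with Invalid Opcode (00/01), i.e. to fail.
  nvme admin-passthru /dev/nvme0 --opcode=0xff || echo 'rejected, as expected'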
00:07:59.386 passed 00:07:59.386 Test: blockdev write read 8 blocks ...passed 00:07:59.387 Test: blockdev write read size > 128k ...passed 00:07:59.387 Test: blockdev write read invalid size ...passed 00:07:59.387 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:59.387 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:59.387 Test: blockdev write read max offset ...passed 00:07:59.387 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:59.387 Test: blockdev writev readv 8 blocks ...passed 00:07:59.387 Test: blockdev writev readv 30 x 1block ...passed 00:07:59.387 Test: blockdev writev readv block ...passed 00:07:59.387 Test: blockdev writev readv size > 128k ...passed 00:07:59.387 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:59.387 Test: blockdev comparev and writev ...[2024-10-09 07:48:01.125091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x2c780e000 len:0x1000 00:07:59.387 [2024-10-09 07:48:01.125153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:59.387 passed 00:07:59.387 Test: blockdev nvme passthru rw ...passed 00:07:59.387 Test: blockdev nvme passthru vendor specific ...passed 00:07:59.387 Test: blockdev nvme admin passthru ...passed 00:07:59.387 Test: blockdev copy ...passed 00:07:59.387 Suite: bdevio tests on: Nvme0n1 00:07:59.387 Test: blockdev write read block ...passed 00:07:59.387 Test: blockdev write zeroes read block ...passed 00:07:59.387 Test: blockdev write zeroes read no split ...passed 00:07:59.387 Test: blockdev write zeroes read split ...passed 00:07:59.387 Test: blockdev write zeroes read split partial ...passed 00:07:59.387 Test: blockdev reset ...[2024-10-09 07:48:01.181281] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:07:59.387 [2024-10-09 07:48:01.185151] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:07:59.387 passed 00:07:59.387 Test: blockdev write read 8 blocks ...passed 00:07:59.387 Test: blockdev write read size > 128k ...passed 00:07:59.387 Test: blockdev write read invalid size ...passed 00:07:59.387 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:59.387 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:59.387 Test: blockdev write read max offset ...passed 00:07:59.387 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:59.387 Test: blockdev writev readv 8 blocks ...passed 00:07:59.387 Test: blockdev writev readv 30 x 1block ...passed 00:07:59.387 Test: blockdev writev readv block ...passed 00:07:59.387 Test: blockdev writev readv size > 128k ...passed 00:07:59.387 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:59.387 Test: blockdev comparev and writev ...[2024-10-09 07:48:01.194024] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:07:59.387 separate metadata which is not supported yet. 00:07:59.387 passed 00:07:59.387 Test: blockdev nvme passthru rw ...
00:07:59.387 passed 00:07:59.387 Test: blockdev nvme passthru vendor specific ...[2024-10-09 07:48:01.194874] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:07:59.387 [2024-10-09 07:48:01.195101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:07:59.387 passed 00:07:59.387 Test: blockdev nvme admin passthru ...passed 00:07:59.387 Test: blockdev copy ...passed 00:07:59.387 00:07:59.387 Run Summary: Type Total Ran Passed Failed Inactive 00:07:59.387 suites 7 7 n/a 0 0 00:07:59.387 tests 161 161 161 0 0 00:07:59.387 asserts 1025 1025 1025 0 n/a 00:07:59.387 00:07:59.387 Elapsed time = 1.444 seconds 00:07:59.387 0 00:07:59.387 07:48:01 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 63185 00:07:59.387 07:48:01 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 63185 ']' 00:07:59.387 07:48:01 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 63185 00:07:59.387 07:48:01 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@955 -- # uname 00:07:59.387 07:48:01 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:59.387 07:48:01 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 63185 00:07:59.387 killing process with pid 63185 00:07:59.387 07:48:01 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:59.387 07:48:01 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:59.387 07:48:01 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 63185' 00:07:59.387 07:48:01 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@969 -- # kill 63185 00:07:59.387 07:48:01 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@974 -- # wait 63185 00:08:00.321 07:48:02 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:08:00.321 00:08:00.321 real 0m2.874s 00:08:00.321 user 0m7.266s 00:08:00.321 sys 0m0.369s 00:08:00.321 07:48:02 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:00.321 07:48:02 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:08:00.321 ************************************ 00:08:00.321 END TEST bdev_bounds 00:08:00.321 ************************************ 00:08:00.321 07:48:02 blockdev_nvme_gpt -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:08:00.321 07:48:02 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:00.321 07:48:02 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:00.321 07:48:02 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:00.321 ************************************ 00:08:00.321 START TEST bdev_nbd 00:08:00.321 ************************************ 00:08:00.321 07:48:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:08:00.321 07:48:02 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:08:00.579 07:48:02 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ 
Linux == Linux ]] 00:08:00.579 07:48:02 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:00.579 07:48:02 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:00.579 07:48:02 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:00.579 07:48:02 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:08:00.579 07:48:02 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=7 00:08:00.579 07:48:02 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:08:00.579 07:48:02 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:08:00.579 07:48:02 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:08:00.579 07:48:02 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=7 00:08:00.579 07:48:02 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:00.579 07:48:02 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:08:00.579 07:48:02 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:00.579 07:48:02 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:08:00.579 07:48:02 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=63249 00:08:00.579 07:48:02 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:08:00.579 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:08:00.579 07:48:02 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:08:00.579 07:48:02 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 63249 /var/tmp/spdk-nbd.sock 00:08:00.579 07:48:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 63249 ']' 00:08:00.579 07:48:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:00.579 07:48:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:00.579 07:48:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:08:00.579 07:48:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:00.579 07:48:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:08:00.579 [2024-10-09 07:48:02.457907] Starting SPDK v25.01-pre git sha1 1c2942c86 / DPDK 24.03.0 initialization... 
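From this point the nbd stage exports each bdev through the kernel nbd driver: bdev_svc is launched with its RPC socket at /var/tmp/spdk-nbd.sock, and rpc.py maps bdevs to /dev/nbdX nodes. Stripped of the harness plumbing, the flow is roughly the sketch below (paths and RPC method names as they appear in this log; the readiness loop stands in for the harness's waitforlisten):

  spdk=/home/vagrant/spdk_repo/spdk
  sock=/var/tmp/spdk-nbd.sock
  [[ -e /sys/module/nbd ]] || modprobe nbd   # the harness only checks for the module
  $spdk/test/app/bdev_svc/bdev_svc -r $sock --json $spdk/test/bdev/bdev.json &
  until $spdk/scripts/rpc.py -s $sock rpc_get_methods >/dev/null 2>&1; do sleep 0.2; done
  $spdk/scripts/rpc.py -s $sock nbd_start_disk Nvme0n1 /dev/nbd0
  $spdk/scripts/rpc.py -s $sock nbd_get_disks | jq -r '.[] | .nbd_device'
  $spdk/scripts/rpc.py -s $sock nbd_stop_disk /dev/nbd0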
00:08:00.579 [2024-10-09 07:48:02.458059] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:00.837 [2024-10-09 07:48:02.625422] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:01.095 [2024-10-09 07:48:02.865078] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:08:01.661 07:48:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:01.661 07:48:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # return 0 00:08:01.661 07:48:03 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:08:01.661 07:48:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:01.661 07:48:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:01.661 07:48:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:08:01.661 07:48:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:08:01.661 07:48:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:01.661 07:48:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:01.661 07:48:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:08:01.661 07:48:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:08:01.661 07:48:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:08:01.661 07:48:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:08:01.661 07:48:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:01.661 07:48:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:08:01.920 07:48:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:08:01.920 07:48:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:08:01.920 07:48:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:08:01.920 07:48:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:08:01.920 07:48:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:08:01.920 07:48:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:01.920 07:48:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:01.920 07:48:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:08:01.920 07:48:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:08:01.920 07:48:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:01.920 07:48:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:01.920 07:48:03 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:01.920 1+0 records in 00:08:01.920 1+0 records out 00:08:01.920 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000454405 s, 9.0 MB/s 00:08:01.920 07:48:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:01.920 07:48:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:08:01.920 07:48:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:01.920 07:48:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:01.920 07:48:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:08:01.920 07:48:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:01.920 07:48:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:01.920 07:48:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 00:08:02.178 07:48:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:08:02.178 07:48:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:08:02.178 07:48:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:08:02.178 07:48:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:08:02.178 07:48:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:08:02.178 07:48:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:02.178 07:48:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:02.178 07:48:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:08:02.178 07:48:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:08:02.178 07:48:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:02.178 07:48:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:02.178 07:48:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:02.178 1+0 records in 00:08:02.178 1+0 records out 00:08:02.178 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000656339 s, 6.2 MB/s 00:08:02.178 07:48:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:02.436 07:48:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:08:02.436 07:48:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:02.436 07:48:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:02.436 07:48:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:08:02.436 07:48:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:02.436 07:48:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:02.437 07:48:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme1n1p2 00:08:02.694 07:48:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:08:02.694 07:48:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:08:02.695 07:48:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:08:02.695 07:48:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd2 00:08:02.695 07:48:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:08:02.695 07:48:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:02.695 07:48:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:02.695 07:48:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd2 /proc/partitions 00:08:02.695 07:48:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:08:02.695 07:48:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:02.695 07:48:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:02.695 07:48:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:02.695 1+0 records in 00:08:02.695 1+0 records out 00:08:02.695 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000833796 s, 4.9 MB/s 00:08:02.695 07:48:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:02.695 07:48:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:08:02.695 07:48:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:02.695 07:48:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:02.695 07:48:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:08:02.695 07:48:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:02.695 07:48:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:02.695 07:48:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:08:02.953 07:48:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:08:02.953 07:48:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:08:02.953 07:48:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:08:02.953 07:48:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd3 00:08:02.953 07:48:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:08:02.953 07:48:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:02.953 07:48:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:02.953 07:48:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd3 /proc/partitions 00:08:02.953 07:48:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:08:02.953 07:48:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:02.953 07:48:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:02.953 07:48:04 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@885 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:02.953 1+0 records in 00:08:02.953 1+0 records out 00:08:02.953 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000719104 s, 5.7 MB/s 00:08:02.953 07:48:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:02.953 07:48:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:08:02.953 07:48:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:02.953 07:48:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:02.953 07:48:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:08:02.953 07:48:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:02.953 07:48:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:02.953 07:48:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:08:03.212 07:48:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:08:03.470 07:48:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:08:03.470 07:48:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:08:03.470 07:48:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd4 00:08:03.470 07:48:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:08:03.470 07:48:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:03.470 07:48:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:03.470 07:48:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd4 /proc/partitions 00:08:03.470 07:48:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:08:03.470 07:48:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:03.470 07:48:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:03.470 07:48:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:03.470 1+0 records in 00:08:03.470 1+0 records out 00:08:03.470 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000797155 s, 5.1 MB/s 00:08:03.470 07:48:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:03.470 07:48:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:08:03.470 07:48:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:03.470 07:48:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:03.470 07:48:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:08:03.470 07:48:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:03.470 07:48:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:03.470 07:48:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme2n3 00:08:03.728 07:48:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:08:03.728 07:48:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:08:03.728 07:48:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:08:03.728 07:48:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd5 00:08:03.728 07:48:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:08:03.728 07:48:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:03.728 07:48:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:03.728 07:48:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd5 /proc/partitions 00:08:03.728 07:48:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:08:03.728 07:48:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:03.728 07:48:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:03.728 07:48:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:03.728 1+0 records in 00:08:03.728 1+0 records out 00:08:03.728 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000655445 s, 6.2 MB/s 00:08:03.728 07:48:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:03.728 07:48:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:08:03.728 07:48:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:03.728 07:48:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:03.728 07:48:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:08:03.728 07:48:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:03.728 07:48:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:03.728 07:48:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:08:03.986 07:48:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:08:03.987 07:48:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:08:03.987 07:48:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:08:03.987 07:48:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd6 00:08:03.987 07:48:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:08:03.987 07:48:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:03.987 07:48:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:03.987 07:48:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd6 /proc/partitions 00:08:03.987 07:48:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:08:03.987 07:48:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:03.987 07:48:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:03.987 07:48:05 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@885 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:03.987 1+0 records in 00:08:03.987 1+0 records out 00:08:03.987 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00113783 s, 3.6 MB/s 00:08:03.987 07:48:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:03.987 07:48:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:08:03.987 07:48:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:03.987 07:48:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:03.987 07:48:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:08:03.987 07:48:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:03.987 07:48:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:03.987 07:48:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:04.554 07:48:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:08:04.554 { 00:08:04.554 "nbd_device": "/dev/nbd0", 00:08:04.554 "bdev_name": "Nvme0n1" 00:08:04.554 }, 00:08:04.554 { 00:08:04.554 "nbd_device": "/dev/nbd1", 00:08:04.554 "bdev_name": "Nvme1n1p1" 00:08:04.554 }, 00:08:04.554 { 00:08:04.554 "nbd_device": "/dev/nbd2", 00:08:04.554 "bdev_name": "Nvme1n1p2" 00:08:04.554 }, 00:08:04.554 { 00:08:04.554 "nbd_device": "/dev/nbd3", 00:08:04.554 "bdev_name": "Nvme2n1" 00:08:04.554 }, 00:08:04.554 { 00:08:04.554 "nbd_device": "/dev/nbd4", 00:08:04.554 "bdev_name": "Nvme2n2" 00:08:04.554 }, 00:08:04.554 { 00:08:04.554 "nbd_device": "/dev/nbd5", 00:08:04.554 "bdev_name": "Nvme2n3" 00:08:04.554 }, 00:08:04.554 { 00:08:04.554 "nbd_device": "/dev/nbd6", 00:08:04.554 "bdev_name": "Nvme3n1" 00:08:04.554 } 00:08:04.554 ]' 00:08:04.554 07:48:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:08:04.554 07:48:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:08:04.554 { 00:08:04.554 "nbd_device": "/dev/nbd0", 00:08:04.554 "bdev_name": "Nvme0n1" 00:08:04.554 }, 00:08:04.554 { 00:08:04.554 "nbd_device": "/dev/nbd1", 00:08:04.554 "bdev_name": "Nvme1n1p1" 00:08:04.554 }, 00:08:04.554 { 00:08:04.554 "nbd_device": "/dev/nbd2", 00:08:04.554 "bdev_name": "Nvme1n1p2" 00:08:04.554 }, 00:08:04.554 { 00:08:04.554 "nbd_device": "/dev/nbd3", 00:08:04.554 "bdev_name": "Nvme2n1" 00:08:04.554 }, 00:08:04.554 { 00:08:04.554 "nbd_device": "/dev/nbd4", 00:08:04.554 "bdev_name": "Nvme2n2" 00:08:04.554 }, 00:08:04.554 { 00:08:04.554 "nbd_device": "/dev/nbd5", 00:08:04.554 "bdev_name": "Nvme2n3" 00:08:04.554 }, 00:08:04.554 { 00:08:04.554 "nbd_device": "/dev/nbd6", 00:08:04.554 "bdev_name": "Nvme3n1" 00:08:04.554 } 00:08:04.554 ]' 00:08:04.554 07:48:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:08:04.554 07:48:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:08:04.554 07:48:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:04.554 07:48:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 
-- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:08:04.554 07:48:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:04.554 07:48:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:08:04.554 07:48:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:04.554 07:48:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:04.813 07:48:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:04.813 07:48:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:04.813 07:48:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:04.813 07:48:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:04.813 07:48:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:04.813 07:48:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:04.813 07:48:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:04.813 07:48:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:04.813 07:48:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:04.813 07:48:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:05.071 07:48:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:05.071 07:48:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:05.071 07:48:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:05.071 07:48:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:05.071 07:48:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:05.071 07:48:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:05.071 07:48:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:05.071 07:48:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:05.071 07:48:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:05.071 07:48:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:08:05.330 07:48:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:08:05.330 07:48:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:08:05.330 07:48:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:08:05.330 07:48:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:05.330 07:48:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:05.330 07:48:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:08:05.330 07:48:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:05.330 07:48:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:05.330 07:48:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:05.330 07:48:07 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:08:05.897 07:48:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:08:05.897 07:48:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:08:05.897 07:48:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:08:05.897 07:48:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:05.897 07:48:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:05.897 07:48:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:08:05.897 07:48:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:05.897 07:48:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:05.897 07:48:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:05.898 07:48:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:08:06.157 07:48:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:08:06.157 07:48:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:08:06.157 07:48:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:08:06.157 07:48:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:06.157 07:48:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:06.157 07:48:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:08:06.157 07:48:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:06.157 07:48:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:06.157 07:48:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:06.157 07:48:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:08:06.415 07:48:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:08:06.415 07:48:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:08:06.415 07:48:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:08:06.415 07:48:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:06.416 07:48:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:06.416 07:48:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:08:06.416 07:48:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:06.416 07:48:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:06.416 07:48:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:06.416 07:48:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:08:06.674 07:48:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:08:06.674 07:48:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:08:06.674 07:48:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 
-- # local nbd_name=nbd6 00:08:06.674 07:48:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:06.674 07:48:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:06.674 07:48:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:08:06.674 07:48:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:06.674 07:48:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:06.674 07:48:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:06.674 07:48:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:06.674 07:48:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:06.933 07:48:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:06.933 07:48:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:06.933 07:48:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:06.933 07:48:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:06.933 07:48:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:08:06.933 07:48:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:06.933 07:48:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:08:06.933 07:48:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:08:06.933 07:48:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:08:06.933 07:48:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:08:06.933 07:48:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:08:06.933 07:48:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:08:06.933 07:48:08 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:08:06.933 07:48:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:06.933 07:48:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:06.933 07:48:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:06.933 07:48:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:06.933 07:48:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:06.933 07:48:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:08:06.933 07:48:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:06.933 07:48:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:06.933 07:48:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:06.933 
07:48:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:06.933 07:48:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:06.933 07:48:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:08:06.933 07:48:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:06.933 07:48:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:06.933 07:48:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:08:07.500 /dev/nbd0 00:08:07.500 07:48:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:07.500 07:48:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:07.500 07:48:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:08:07.500 07:48:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:08:07.500 07:48:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:07.500 07:48:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:07.500 07:48:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:08:07.500 07:48:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:08:07.500 07:48:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:07.500 07:48:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:07.500 07:48:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:07.500 1+0 records in 00:08:07.500 1+0 records out 00:08:07.500 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000709209 s, 5.8 MB/s 00:08:07.500 07:48:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:07.500 07:48:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:08:07.500 07:48:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:07.500 07:48:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:07.500 07:48:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:08:07.500 07:48:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:07.500 07:48:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:07.500 07:48:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 /dev/nbd1 00:08:07.758 /dev/nbd1 00:08:07.758 07:48:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:07.758 07:48:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:07.758 07:48:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:08:07.758 07:48:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:08:07.758 07:48:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:07.758 07:48:09 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:07.758 07:48:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:08:07.758 07:48:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:08:07.758 07:48:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:07.758 07:48:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:07.758 07:48:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:07.758 1+0 records in 00:08:07.758 1+0 records out 00:08:07.758 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000570909 s, 7.2 MB/s 00:08:07.758 07:48:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:07.758 07:48:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:08:07.758 07:48:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:07.758 07:48:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:07.758 07:48:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:08:07.758 07:48:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:07.758 07:48:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:07.758 07:48:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2 /dev/nbd10 00:08:08.029 /dev/nbd10 00:08:08.029 07:48:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:08:08.029 07:48:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:08:08.029 07:48:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd10 00:08:08.029 07:48:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:08:08.029 07:48:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:08.029 07:48:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:08.029 07:48:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd10 /proc/partitions 00:08:08.029 07:48:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:08:08.029 07:48:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:08.029 07:48:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:08.029 07:48:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:08.029 1+0 records in 00:08:08.029 1+0 records out 00:08:08.029 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000633171 s, 6.5 MB/s 00:08:08.029 07:48:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:08.029 07:48:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:08:08.029 07:48:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:08.029 07:48:09 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:08.029 07:48:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:08:08.029 07:48:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:08.029 07:48:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:08.029 07:48:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:08:08.288 /dev/nbd11 00:08:08.288 07:48:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:08:08.288 07:48:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:08:08.288 07:48:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd11 00:08:08.288 07:48:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:08:08.288 07:48:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:08.288 07:48:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:08.288 07:48:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd11 /proc/partitions 00:08:08.288 07:48:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:08:08.288 07:48:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:08.288 07:48:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:08.288 07:48:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:08.288 1+0 records in 00:08:08.288 1+0 records out 00:08:08.288 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000648649 s, 6.3 MB/s 00:08:08.288 07:48:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:08.288 07:48:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:08:08.288 07:48:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:08.288 07:48:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:08.288 07:48:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:08:08.288 07:48:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:08.288 07:48:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:08.288 07:48:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:08:08.546 /dev/nbd12 00:08:08.546 07:48:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:08:08.546 07:48:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:08:08.546 07:48:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd12 00:08:08.546 07:48:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:08:08.546 07:48:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:08.546 07:48:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:08.546 07:48:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd12 /proc/partitions 
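Each mapping is verified the same way throughout this stage: poll /proc/partitions for the node, read one 4 KiB block with O_DIRECT, and check that exactly 4096 bytes arrived; that is what the repeated grep/dd/stat triples and "1+0 records" lines are. Reconstructed roughly from the trace (the retry bound of 20 is from the log; the sleep interval is a guess, since the trace does not show it):

  waitfornbd() {
      local nbd_name=$1 i
      for ((i = 1; i <= 20; i++)); do
          grep -q -w "$nbd_name" /proc/partitions && break
          sleep 0.1
      done
      # One direct-I/O read proves the device actually serves data.
      dd if=/dev/$nbd_name of=/tmp/nbdtest bs=4096 count=1 iflag=direct || return 1
      [[ $(stat -c %s /tmp/nbdtest) -eq 4096 ]]
  }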
00:08:08.546 07:48:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:08:08.546 07:48:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:08.546 07:48:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:08.546 07:48:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:08.546 1+0 records in 00:08:08.546 1+0 records out 00:08:08.546 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00072793 s, 5.6 MB/s 00:08:08.546 07:48:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:08.546 07:48:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:08:08.546 07:48:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:08.805 07:48:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:08.805 07:48:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:08:08.805 07:48:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:08.805 07:48:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:08.805 07:48:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:08:09.063 /dev/nbd13 00:08:09.063 07:48:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:08:09.063 07:48:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:08:09.063 07:48:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd13 00:08:09.063 07:48:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:08:09.063 07:48:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:09.063 07:48:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:09.063 07:48:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd13 /proc/partitions 00:08:09.063 07:48:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:08:09.063 07:48:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:09.063 07:48:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:09.063 07:48:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:09.063 1+0 records in 00:08:09.063 1+0 records out 00:08:09.063 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000598502 s, 6.8 MB/s 00:08:09.063 07:48:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:09.063 07:48:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:08:09.063 07:48:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:09.063 07:48:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:09.063 07:48:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:08:09.063 07:48:10 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:09.063 07:48:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:09.063 07:48:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:08:09.321 /dev/nbd14 00:08:09.321 07:48:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:08:09.321 07:48:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:08:09.321 07:48:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd14 00:08:09.321 07:48:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:08:09.321 07:48:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:09.321 07:48:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:09.321 07:48:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd14 /proc/partitions 00:08:09.321 07:48:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:08:09.321 07:48:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:09.321 07:48:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:09.321 07:48:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:09.321 1+0 records in 00:08:09.321 1+0 records out 00:08:09.321 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000841883 s, 4.9 MB/s 00:08:09.321 07:48:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:09.321 07:48:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:08:09.321 07:48:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:09.321 07:48:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:09.321 07:48:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:08:09.321 07:48:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:09.321 07:48:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:09.321 07:48:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:09.321 07:48:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:09.321 07:48:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:09.580 07:48:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:09.580 { 00:08:09.580 "nbd_device": "/dev/nbd0", 00:08:09.580 "bdev_name": "Nvme0n1" 00:08:09.580 }, 00:08:09.580 { 00:08:09.580 "nbd_device": "/dev/nbd1", 00:08:09.580 "bdev_name": "Nvme1n1p1" 00:08:09.580 }, 00:08:09.580 { 00:08:09.580 "nbd_device": "/dev/nbd10", 00:08:09.580 "bdev_name": "Nvme1n1p2" 00:08:09.580 }, 00:08:09.580 { 00:08:09.580 "nbd_device": "/dev/nbd11", 00:08:09.580 "bdev_name": "Nvme2n1" 00:08:09.580 }, 00:08:09.580 { 00:08:09.580 "nbd_device": "/dev/nbd12", 00:08:09.580 "bdev_name": "Nvme2n2" 00:08:09.580 }, 00:08:09.580 { 00:08:09.580 "nbd_device": "/dev/nbd13", 00:08:09.580 "bdev_name": "Nvme2n3" 
00:08:09.580 }, 00:08:09.580 { 00:08:09.580 "nbd_device": "/dev/nbd14", 00:08:09.580 "bdev_name": "Nvme3n1" 00:08:09.580 } 00:08:09.580 ]' 00:08:09.580 07:48:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:09.580 07:48:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:09.580 { 00:08:09.580 "nbd_device": "/dev/nbd0", 00:08:09.580 "bdev_name": "Nvme0n1" 00:08:09.580 }, 00:08:09.580 { 00:08:09.580 "nbd_device": "/dev/nbd1", 00:08:09.580 "bdev_name": "Nvme1n1p1" 00:08:09.580 }, 00:08:09.580 { 00:08:09.580 "nbd_device": "/dev/nbd10", 00:08:09.580 "bdev_name": "Nvme1n1p2" 00:08:09.580 }, 00:08:09.580 { 00:08:09.580 "nbd_device": "/dev/nbd11", 00:08:09.580 "bdev_name": "Nvme2n1" 00:08:09.580 }, 00:08:09.580 { 00:08:09.580 "nbd_device": "/dev/nbd12", 00:08:09.580 "bdev_name": "Nvme2n2" 00:08:09.580 }, 00:08:09.580 { 00:08:09.580 "nbd_device": "/dev/nbd13", 00:08:09.580 "bdev_name": "Nvme2n3" 00:08:09.580 }, 00:08:09.580 { 00:08:09.580 "nbd_device": "/dev/nbd14", 00:08:09.580 "bdev_name": "Nvme3n1" 00:08:09.580 } 00:08:09.580 ]' 00:08:09.580 07:48:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:09.580 /dev/nbd1 00:08:09.580 /dev/nbd10 00:08:09.580 /dev/nbd11 00:08:09.580 /dev/nbd12 00:08:09.580 /dev/nbd13 00:08:09.580 /dev/nbd14' 00:08:09.580 07:48:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:09.581 /dev/nbd1 00:08:09.581 /dev/nbd10 00:08:09.581 /dev/nbd11 00:08:09.581 /dev/nbd12 00:08:09.581 /dev/nbd13 00:08:09.581 /dev/nbd14' 00:08:09.581 07:48:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:09.581 07:48:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=7 00:08:09.581 07:48:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 7 00:08:09.581 07:48:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=7 00:08:09.581 07:48:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:08:09.581 07:48:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:08:09.581 07:48:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:09.581 07:48:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:09.581 07:48:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:09.581 07:48:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:08:09.581 07:48:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:09.581 07:48:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:08:09.839 256+0 records in 00:08:09.839 256+0 records out 00:08:09.839 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0093307 s, 112 MB/s 00:08:09.839 07:48:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:09.839 07:48:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:09.839 256+0 records in 00:08:09.839 256+0 records out 00:08:09.839 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.158031 s, 6.6 MB/s 00:08:09.839 07:48:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:09.839 07:48:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:10.097 256+0 records in 00:08:10.097 256+0 records out 00:08:10.097 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.139218 s, 7.5 MB/s 00:08:10.097 07:48:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:10.097 07:48:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:08:10.097 256+0 records in 00:08:10.097 256+0 records out 00:08:10.097 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.13147 s, 8.0 MB/s 00:08:10.097 07:48:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:10.097 07:48:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:08:10.355 256+0 records in 00:08:10.355 256+0 records out 00:08:10.355 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.165713 s, 6.3 MB/s 00:08:10.355 07:48:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:10.355 07:48:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:08:10.355 256+0 records in 00:08:10.355 256+0 records out 00:08:10.355 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.138764 s, 7.6 MB/s 00:08:10.355 07:48:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:10.355 07:48:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:08:10.613 256+0 records in 00:08:10.613 256+0 records out 00:08:10.613 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.155275 s, 6.8 MB/s 00:08:10.613 07:48:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:10.614 07:48:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:08:10.872 256+0 records in 00:08:10.872 256+0 records out 00:08:10.872 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.153231 s, 6.8 MB/s 00:08:10.872 07:48:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:08:10.872 07:48:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:10.872 07:48:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:10.872 07:48:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:10.872 07:48:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:08:10.872 07:48:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:10.872 07:48:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:10.872 07:48:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in 
"${nbd_list[@]}" 00:08:10.872 07:48:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:08:10.872 07:48:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:10.872 07:48:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:08:10.872 07:48:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:10.872 07:48:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:08:10.872 07:48:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:10.872 07:48:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:08:10.872 07:48:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:10.872 07:48:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:08:10.872 07:48:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:10.872 07:48:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:08:10.872 07:48:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:10.872 07:48:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:08:10.872 07:48:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:08:10.872 07:48:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:08:10.872 07:48:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:10.872 07:48:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:10.872 07:48:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:10.872 07:48:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:08:10.872 07:48:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:10.872 07:48:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:11.130 07:48:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:11.130 07:48:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:11.130 07:48:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:11.130 07:48:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:11.130 07:48:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:11.130 07:48:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:11.130 07:48:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:11.130 07:48:13 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:08:11.130 07:48:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:11.130 07:48:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:11.388 07:48:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:11.646 07:48:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:11.646 07:48:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:11.646 07:48:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:11.646 07:48:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:11.646 07:48:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:11.646 07:48:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:11.646 07:48:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:11.646 07:48:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:11.646 07:48:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:08:11.926 07:48:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:08:11.926 07:48:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:08:11.926 07:48:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:08:11.926 07:48:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:11.926 07:48:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:11.926 07:48:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:08:11.926 07:48:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:11.926 07:48:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:11.926 07:48:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:11.926 07:48:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:08:12.184 07:48:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:08:12.184 07:48:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:08:12.184 07:48:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:08:12.184 07:48:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:12.184 07:48:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:12.184 07:48:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:08:12.184 07:48:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:12.184 07:48:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:12.184 07:48:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:12.184 07:48:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:08:12.750 07:48:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd12 00:08:12.750 07:48:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:08:12.750 07:48:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:08:12.750 07:48:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:12.750 07:48:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:12.750 07:48:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:08:12.750 07:48:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:12.750 07:48:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:12.750 07:48:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:12.750 07:48:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:08:13.008 07:48:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:08:13.008 07:48:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:08:13.008 07:48:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:08:13.008 07:48:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:13.008 07:48:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:13.008 07:48:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:08:13.008 07:48:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:13.008 07:48:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:13.008 07:48:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:13.008 07:48:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:08:13.265 07:48:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:08:13.265 07:48:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:08:13.265 07:48:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:08:13.265 07:48:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:13.265 07:48:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:13.265 07:48:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:08:13.265 07:48:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:13.265 07:48:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:13.265 07:48:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:13.265 07:48:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:13.265 07:48:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:13.833 07:48:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:13.833 07:48:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:13.833 07:48:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:13.833 07:48:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # 
nbd_disks_name= 00:08:13.833 07:48:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:08:13.833 07:48:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:13.833 07:48:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:08:13.833 07:48:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:08:13.833 07:48:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:08:13.833 07:48:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:08:13.833 07:48:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:13.833 07:48:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:08:13.833 07:48:15 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:08:13.833 07:48:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:13.833 07:48:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:08:13.833 07:48:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:08:14.100 malloc_lvol_verify 00:08:14.100 07:48:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:08:14.358 4adfeca1-a1a4-4ea7-b3c1-86871cf2758b 00:08:14.358 07:48:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:08:14.616 f91c13f2-dfa4-42da-b8ff-008c4f5bdde3 00:08:14.616 07:48:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:08:14.874 /dev/nbd0 00:08:14.874 07:48:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:08:14.874 07:48:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:08:14.874 07:48:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:08:14.874 07:48:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:08:14.874 07:48:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:08:14.874 mke2fs 1.47.0 (5-Feb-2023) 00:08:14.874 Discarding device blocks: 0/4096 done 00:08:14.874 Creating filesystem with 4096 1k blocks and 1024 inodes 00:08:14.874 00:08:14.874 Allocating group tables: 0/1 done 00:08:14.874 Writing inode tables: 0/1 done 00:08:14.874 Creating journal (1024 blocks): done 00:08:14.874 Writing superblocks and filesystem accounting information: 0/1 done 00:08:14.874 00:08:14.874 07:48:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:08:14.875 07:48:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:14.875 07:48:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:08:14.875 07:48:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:14.875 07:48:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:08:14.875 07:48:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:08:14.875 07:48:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:15.133 07:48:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:15.133 07:48:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:15.133 07:48:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:15.133 07:48:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:15.133 07:48:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:15.133 07:48:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:15.403 07:48:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:15.403 07:48:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:15.403 07:48:17 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 63249 00:08:15.403 07:48:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 63249 ']' 00:08:15.403 07:48:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 63249 00:08:15.403 07:48:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@955 -- # uname 00:08:15.403 07:48:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:15.403 07:48:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 63249 00:08:15.403 killing process with pid 63249 00:08:15.403 07:48:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:15.403 07:48:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:15.403 07:48:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 63249' 00:08:15.403 07:48:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@969 -- # kill 63249 00:08:15.403 07:48:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@974 -- # wait 63249 00:08:16.803 07:48:18 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:08:16.803 00:08:16.803 real 0m16.119s 00:08:16.803 user 0m23.412s 00:08:16.803 sys 0m4.932s 00:08:16.803 07:48:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:16.803 ************************************ 00:08:16.803 END TEST bdev_nbd 00:08:16.803 ************************************ 00:08:16.803 07:48:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:08:16.803 07:48:18 blockdev_nvme_gpt -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:08:16.803 07:48:18 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = nvme ']' 00:08:16.803 07:48:18 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = gpt ']' 00:08:16.804 skipping fio tests on NVMe due to multi-ns failures. 00:08:16.804 07:48:18 blockdev_nvme_gpt -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
00:08:16.804 07:48:18 blockdev_nvme_gpt -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:08:16.804 07:48:18 blockdev_nvme_gpt -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:08:16.804 07:48:18 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:08:16.804 07:48:18 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:16.804 07:48:18 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:16.804 ************************************ 00:08:16.804 START TEST bdev_verify 00:08:16.804 ************************************ 00:08:16.804 07:48:18 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:08:16.804 [2024-10-09 07:48:18.621773] Starting SPDK v25.01-pre git sha1 1c2942c86 / DPDK 24.03.0 initialization... 00:08:16.804 [2024-10-09 07:48:18.621994] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63710 ] 00:08:16.804 [2024-10-09 07:48:18.807141] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:17.062 [2024-10-09 07:48:19.057469] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:08:17.062 [2024-10-09 07:48:19.057483] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:08:17.995 Running I/O for 5 seconds... 
00:08:20.329 17920.00 IOPS, 70.00 MiB/s [2024-10-09T07:48:23.276Z] 18208.00 IOPS, 71.12 MiB/s [2024-10-09T07:48:24.209Z] 18090.67 IOPS, 70.67 MiB/s [2024-10-09T07:48:25.144Z] 18192.00 IOPS, 71.06 MiB/s [2024-10-09T07:48:25.144Z] 18150.40 IOPS, 70.90 MiB/s 00:08:23.132 Latency(us) 00:08:23.132 [2024-10-09T07:48:25.144Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:23.132 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:23.132 Verification LBA range: start 0x0 length 0xbd0bd 00:08:23.132 Nvme0n1 : 5.08 1298.88 5.07 0.00 0.00 97979.60 13583.83 97708.22 00:08:23.132 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:23.132 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:08:23.132 Nvme0n1 : 5.10 1255.18 4.90 0.00 0.00 101750.25 17515.99 96278.34 00:08:23.132 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:23.132 Verification LBA range: start 0x0 length 0x4ff80 00:08:23.132 Nvme1n1p1 : 5.10 1306.10 5.10 0.00 0.00 97601.04 17039.36 90558.84 00:08:23.132 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:23.132 Verification LBA range: start 0x4ff80 length 0x4ff80 00:08:23.132 Nvme1n1p1 : 5.10 1254.67 4.90 0.00 0.00 101601.35 17515.99 89605.59 00:08:23.132 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:23.132 Verification LBA range: start 0x0 length 0x4ff7f 00:08:23.132 Nvme1n1p2 : 5.10 1305.21 5.10 0.00 0.00 97437.82 19184.17 84839.33 00:08:23.132 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:23.132 Verification LBA range: start 0x4ff7f length 0x4ff7f 00:08:23.132 Nvme1n1p2 : 5.11 1253.62 4.90 0.00 0.00 101474.92 19184.17 86269.21 00:08:23.132 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:23.132 Verification LBA range: start 0x0 length 0x80000 00:08:23.132 Nvme2n1 : 5.10 1304.71 5.10 0.00 0.00 97274.99 19541.64 82456.20 00:08:23.132 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:23.132 Verification LBA range: start 0x80000 length 0x80000 00:08:23.132 Nvme2n1 : 5.11 1252.81 4.89 0.00 0.00 101336.59 20971.52 83409.45 00:08:23.132 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:23.132 Verification LBA range: start 0x0 length 0x80000 00:08:23.132 Nvme2n2 : 5.10 1304.16 5.09 0.00 0.00 97109.24 19303.33 79119.83 00:08:23.132 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:23.132 Verification LBA range: start 0x80000 length 0x80000 00:08:23.132 Nvme2n2 : 5.11 1252.04 4.89 0.00 0.00 101206.47 22163.08 78166.57 00:08:23.132 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:23.132 Verification LBA range: start 0x0 length 0x80000 00:08:23.132 Nvme2n3 : 5.11 1303.72 5.09 0.00 0.00 96949.29 18945.86 80549.70 00:08:23.132 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:23.132 Verification LBA range: start 0x80000 length 0x80000 00:08:23.132 Nvme2n3 : 5.11 1251.67 4.89 0.00 0.00 101042.16 21448.15 80073.08 00:08:23.132 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:23.132 Verification LBA range: start 0x0 length 0x20000 00:08:23.132 Nvme3n1 : 5.11 1302.89 5.09 0.00 0.00 96814.66 20018.27 84362.71 00:08:23.132 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:23.132 Verification LBA range: start 0x20000 length 0x20000 00:08:23.132 
Nvme3n1 : 5.11 1251.31 4.89 0.00 0.00 100880.52 20494.89 82456.20 00:08:23.132 [2024-10-09T07:48:25.144Z] =================================================================================================================== 00:08:23.132 [2024-10-09T07:48:25.144Z] Total : 17896.97 69.91 0.00 0.00 99280.01 13583.83 97708.22 00:08:24.503 00:08:24.503 real 0m7.949s 00:08:24.503 user 0m14.276s 00:08:24.503 sys 0m0.319s 00:08:24.503 07:48:26 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:24.503 07:48:26 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:08:24.503 ************************************ 00:08:24.503 END TEST bdev_verify 00:08:24.503 ************************************ 00:08:24.503 07:48:26 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:08:24.503 07:48:26 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:08:24.503 07:48:26 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:24.503 07:48:26 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:24.503 ************************************ 00:08:24.503 START TEST bdev_verify_big_io 00:08:24.503 ************************************ 00:08:24.503 07:48:26 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:08:24.761 [2024-10-09 07:48:26.595974] Starting SPDK v25.01-pre git sha1 1c2942c86 / DPDK 24.03.0 initialization... 00:08:24.761 [2024-10-09 07:48:26.596137] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63815 ] 00:08:25.019 [2024-10-09 07:48:26.772538] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:25.277 [2024-10-09 07:48:27.059909] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:08:25.277 [2024-10-09 07:48:27.059919] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:08:26.213 Running I/O for 5 seconds... 
00:08:32.026 744.00 IOPS, 46.50 MiB/s [2024-10-09T07:48:34.296Z] 2483.00 IOPS, 155.19 MiB/s [2024-10-09T07:48:34.296Z] 3018.33 IOPS, 188.65 MiB/s 00:08:32.284 Latency(us) 00:08:32.284 [2024-10-09T07:48:34.296Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:32.284 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:32.284 Verification LBA range: start 0x0 length 0xbd0b 00:08:32.284 Nvme0n1 : 5.74 112.90 7.06 0.00 0.00 1085116.18 45756.04 1121023.07 00:08:32.284 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:32.284 Verification LBA range: start 0xbd0b length 0xbd0b 00:08:32.284 Nvme0n1 : 5.96 99.28 6.21 0.00 0.00 1241091.02 22758.87 1769233.69 00:08:32.284 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:32.284 Verification LBA range: start 0x0 length 0x4ff8 00:08:32.284 Nvme1n1p1 : 5.82 100.58 6.29 0.00 0.00 1200050.42 72447.07 1822615.74 00:08:32.284 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:32.284 Verification LBA range: start 0x4ff8 length 0x4ff8 00:08:32.284 Nvme1n1p1 : 5.97 104.21 6.51 0.00 0.00 1140996.13 42419.67 1182031.13 00:08:32.284 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:32.284 Verification LBA range: start 0x0 length 0x4ff7 00:08:32.284 Nvme1n1p2 : 5.98 77.59 4.85 0.00 0.00 1505387.60 147753.89 1868371.78 00:08:32.284 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:32.284 Verification LBA range: start 0x4ff7 length 0x4ff7 00:08:32.284 Nvme1n1p2 : 6.01 98.89 6.18 0.00 0.00 1162179.76 86269.21 1837867.75 00:08:32.284 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:32.284 Verification LBA range: start 0x0 length 0x8000 00:08:32.284 Nvme2n1 : 5.91 121.82 7.61 0.00 0.00 939619.42 109623.85 957063.91 00:08:32.284 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:32.284 Verification LBA range: start 0x8000 length 0x8000 00:08:32.284 Nvme2n1 : 6.03 103.69 6.48 0.00 0.00 1088329.42 34078.72 1853119.77 00:08:32.284 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:32.284 Verification LBA range: start 0x0 length 0x8000 00:08:32.284 Nvme2n2 : 5.87 125.32 7.83 0.00 0.00 891875.02 50283.99 991380.95 00:08:32.284 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:32.284 Verification LBA range: start 0x8000 length 0x8000 00:08:32.284 Nvme2n2 : 6.03 107.63 6.73 0.00 0.00 1022586.23 18111.77 1898875.81 00:08:32.284 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:32.284 Verification LBA range: start 0x0 length 0x8000 00:08:32.284 Nvme2n3 : 5.95 134.42 8.40 0.00 0.00 809600.47 37176.79 1029510.98 00:08:32.284 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:32.284 Verification LBA range: start 0x8000 length 0x8000 00:08:32.284 Nvme2n3 : 6.05 113.19 7.07 0.00 0.00 938402.46 18350.08 1944631.85 00:08:32.284 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:32.284 Verification LBA range: start 0x0 length 0x2000 00:08:32.284 Nvme3n1 : 5.99 149.61 9.35 0.00 0.00 709943.73 2934.23 1067641.02 00:08:32.284 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:32.284 Verification LBA range: start 0x2000 length 0x2000 00:08:32.284 Nvme3n1 : 6.12 145.05 9.07 0.00 0.00 721003.12 901.12 1082893.03 00:08:32.284 
[2024-10-09T07:48:34.296Z] =================================================================================================================== 00:08:32.284 [2024-10-09T07:48:34.296Z] Total : 1594.19 99.64 0.00 0.00 997695.88 901.12 1944631.85 00:08:34.812 00:08:34.812 real 0m9.953s 00:08:34.812 user 0m17.955s 00:08:34.812 sys 0m0.382s 00:08:34.812 07:48:36 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:34.812 ************************************ 00:08:34.812 07:48:36 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:08:34.812 END TEST bdev_verify_big_io 00:08:34.812 ************************************ 00:08:34.812 07:48:36 blockdev_nvme_gpt -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:34.812 07:48:36 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:08:34.812 07:48:36 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:34.812 07:48:36 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:34.812 ************************************ 00:08:34.812 START TEST bdev_write_zeroes 00:08:34.812 ************************************ 00:08:34.812 07:48:36 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:34.812 [2024-10-09 07:48:36.621150] Starting SPDK v25.01-pre git sha1 1c2942c86 / DPDK 24.03.0 initialization... 00:08:34.812 [2024-10-09 07:48:36.621425] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63941 ] 00:08:34.812 [2024-10-09 07:48:36.806583] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:35.393 [2024-10-09 07:48:37.124138] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:08:35.958 Running I/O for 1 seconds... 
00:08:37.152 18803.00 IOPS, 73.45 MiB/s 00:08:37.152 Latency(us) 00:08:37.152 [2024-10-09T07:48:39.164Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:37.152 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:37.152 Nvme0n1 : 1.07 2668.10 10.42 0.00 0.00 47834.78 11975.21 150613.64 00:08:37.152 Job: Nvme1n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:37.152 Nvme1n1p1 : 1.08 2675.55 10.45 0.00 0.00 47603.92 12153.95 154426.65 00:08:37.152 Job: Nvme1n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:37.152 Nvme1n1p2 : 1.08 2671.16 10.43 0.00 0.00 47553.40 13226.36 154426.65 00:08:37.152 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:37.152 Nvme2n1 : 1.08 2667.13 10.42 0.00 0.00 47440.84 11141.12 122016.12 00:08:37.152 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:37.152 Nvme2n2 : 1.08 2663.11 10.40 0.00 0.00 47419.56 11260.28 136314.88 00:08:37.152 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:37.152 Nvme2n3 : 1.08 2659.05 10.39 0.00 0.00 47397.92 11141.12 141081.13 00:08:37.152 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:37.152 Nvme3n1 : 1.08 2654.95 10.37 0.00 0.00 47386.36 11736.90 143940.89 00:08:37.152 [2024-10-09T07:48:39.164Z] =================================================================================================================== 00:08:37.152 [2024-10-09T07:48:39.164Z] Total : 18659.04 72.89 0.00 0.00 47519.34 11141.12 154426.65 00:08:38.524 00:08:38.524 real 0m3.893s 00:08:38.524 user 0m3.431s 00:08:38.524 sys 0m0.297s 00:08:38.524 07:48:40 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:38.524 07:48:40 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:08:38.524 ************************************ 00:08:38.524 END TEST bdev_write_zeroes 00:08:38.524 ************************************ 00:08:38.524 07:48:40 blockdev_nvme_gpt -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:38.524 07:48:40 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:08:38.525 07:48:40 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:38.525 07:48:40 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:38.525 ************************************ 00:08:38.525 START TEST bdev_json_nonenclosed 00:08:38.525 ************************************ 00:08:38.525 07:48:40 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:38.782 [2024-10-09 07:48:40.549911] Starting SPDK v25.01-pre git sha1 1c2942c86 / DPDK 24.03.0 initialization... 
00:08:38.782 [2024-10-09 07:48:40.550123] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64000 ] 00:08:38.782 [2024-10-09 07:48:40.734099] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:39.040 [2024-10-09 07:48:40.928907] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:08:39.040 [2024-10-09 07:48:40.929039] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:08:39.040 [2024-10-09 07:48:40.929069] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:08:39.040 [2024-10-09 07:48:40.929085] app.c:1062:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:39.606 00:08:39.606 real 0m0.950s 00:08:39.606 user 0m0.687s 00:08:39.606 sys 0m0.154s 00:08:39.606 07:48:41 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:39.606 07:48:41 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:08:39.606 ************************************ 00:08:39.607 END TEST bdev_json_nonenclosed 00:08:39.607 ************************************ 00:08:39.607 07:48:41 blockdev_nvme_gpt -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:39.607 07:48:41 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:08:39.607 07:48:41 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:39.607 07:48:41 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:39.607 ************************************ 00:08:39.607 START TEST bdev_json_nonarray 00:08:39.607 ************************************ 00:08:39.607 07:48:41 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:39.607 [2024-10-09 07:48:41.546143] Starting SPDK v25.01-pre git sha1 1c2942c86 / DPDK 24.03.0 initialization... 00:08:39.607 [2024-10-09 07:48:41.546400] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64031 ] 00:08:39.865 [2024-10-09 07:48:41.741014] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:40.124 [2024-10-09 07:48:41.970721] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:08:40.124 [2024-10-09 07:48:41.970855] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:08:40.124 [2024-10-09 07:48:41.970885] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:08:40.124 [2024-10-09 07:48:41.970899] app.c:1062:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:40.690 00:08:40.690 real 0m1.084s 00:08:40.690 user 0m0.818s 00:08:40.690 sys 0m0.154s 00:08:40.690 07:48:42 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:40.690 07:48:42 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:08:40.690 ************************************ 00:08:40.690 END TEST bdev_json_nonarray 00:08:40.690 ************************************ 00:08:40.690 07:48:42 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # [[ gpt == bdev ]] 00:08:40.690 07:48:42 blockdev_nvme_gpt -- bdev/blockdev.sh@793 -- # [[ gpt == gpt ]] 00:08:40.690 07:48:42 blockdev_nvme_gpt -- bdev/blockdev.sh@794 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:08:40.690 07:48:42 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:40.690 07:48:42 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:40.690 07:48:42 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:40.690 ************************************ 00:08:40.690 START TEST bdev_gpt_uuid 00:08:40.690 ************************************ 00:08:40.690 07:48:42 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1125 -- # bdev_gpt_uuid 00:08:40.690 07:48:42 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@613 -- # local bdev 00:08:40.690 07:48:42 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@615 -- # start_spdk_tgt 00:08:40.690 07:48:42 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=64062 00:08:40.690 07:48:42 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:08:40.690 07:48:42 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:40.690 07:48:42 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 64062 00:08:40.690 07:48:42 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@831 -- # '[' -z 64062 ']' 00:08:40.690 07:48:42 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:40.690 07:48:42 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:40.690 07:48:42 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:40.690 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:40.690 07:48:42 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:40.690 07:48:42 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:40.690 [2024-10-09 07:48:42.696999] Starting SPDK v25.01-pre git sha1 1c2942c86 / DPDK 24.03.0 initialization... 
00:08:40.690 [2024-10-09 07:48:42.697214] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64062 ] 00:08:40.948 [2024-10-09 07:48:42.875763] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:41.206 [2024-10-09 07:48:43.067185] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:08:42.138 07:48:43 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:42.138 07:48:43 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@864 -- # return 0 00:08:42.138 07:48:43 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@617 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:42.138 07:48:43 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.138 07:48:43 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:42.396 Some configs were skipped because the RPC state that can call them passed over. 00:08:42.396 07:48:44 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.396 07:48:44 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@618 -- # rpc_cmd bdev_wait_for_examine 00:08:42.396 07:48:44 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.396 07:48:44 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:42.396 07:48:44 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.396 07:48:44 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:08:42.396 07:48:44 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.396 07:48:44 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:42.396 07:48:44 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.396 07:48:44 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # bdev='[ 00:08:42.396 { 00:08:42.396 "name": "Nvme1n1p1", 00:08:42.397 "aliases": [ 00:08:42.397 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:08:42.397 ], 00:08:42.397 "product_name": "GPT Disk", 00:08:42.397 "block_size": 4096, 00:08:42.397 "num_blocks": 655104, 00:08:42.397 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:08:42.397 "assigned_rate_limits": { 00:08:42.397 "rw_ios_per_sec": 0, 00:08:42.397 "rw_mbytes_per_sec": 0, 00:08:42.397 "r_mbytes_per_sec": 0, 00:08:42.397 "w_mbytes_per_sec": 0 00:08:42.397 }, 00:08:42.397 "claimed": false, 00:08:42.397 "zoned": false, 00:08:42.397 "supported_io_types": { 00:08:42.397 "read": true, 00:08:42.397 "write": true, 00:08:42.397 "unmap": true, 00:08:42.397 "flush": true, 00:08:42.397 "reset": true, 00:08:42.397 "nvme_admin": false, 00:08:42.397 "nvme_io": false, 00:08:42.397 "nvme_io_md": false, 00:08:42.397 "write_zeroes": true, 00:08:42.397 "zcopy": false, 00:08:42.397 "get_zone_info": false, 00:08:42.397 "zone_management": false, 00:08:42.397 "zone_append": false, 00:08:42.397 "compare": true, 00:08:42.397 "compare_and_write": false, 00:08:42.397 "abort": true, 00:08:42.397 "seek_hole": false, 00:08:42.397 "seek_data": false, 00:08:42.397 "copy": true, 00:08:42.397 "nvme_iov_md": false 00:08:42.397 }, 00:08:42.397 "driver_specific": { 
00:08:42.397 "gpt": { 00:08:42.397 "base_bdev": "Nvme1n1", 00:08:42.397 "offset_blocks": 256, 00:08:42.397 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:08:42.397 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:08:42.397 "partition_name": "SPDK_TEST_first" 00:08:42.397 } 00:08:42.397 } 00:08:42.397 } 00:08:42.397 ]' 00:08:42.397 07:48:44 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # jq -r length 00:08:42.397 07:48:44 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # [[ 1 == \1 ]] 00:08:42.397 07:48:44 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # jq -r '.[0].aliases[0]' 00:08:42.654 07:48:44 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:08:42.654 07:48:44 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:08:42.654 07:48:44 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:08:42.655 07:48:44 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:08:42.655 07:48:44 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.655 07:48:44 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:42.655 07:48:44 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.655 07:48:44 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # bdev='[ 00:08:42.655 { 00:08:42.655 "name": "Nvme1n1p2", 00:08:42.655 "aliases": [ 00:08:42.655 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:08:42.655 ], 00:08:42.655 "product_name": "GPT Disk", 00:08:42.655 "block_size": 4096, 00:08:42.655 "num_blocks": 655103, 00:08:42.655 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:08:42.655 "assigned_rate_limits": { 00:08:42.655 "rw_ios_per_sec": 0, 00:08:42.655 "rw_mbytes_per_sec": 0, 00:08:42.655 "r_mbytes_per_sec": 0, 00:08:42.655 "w_mbytes_per_sec": 0 00:08:42.655 }, 00:08:42.655 "claimed": false, 00:08:42.655 "zoned": false, 00:08:42.655 "supported_io_types": { 00:08:42.655 "read": true, 00:08:42.655 "write": true, 00:08:42.655 "unmap": true, 00:08:42.655 "flush": true, 00:08:42.655 "reset": true, 00:08:42.655 "nvme_admin": false, 00:08:42.655 "nvme_io": false, 00:08:42.655 "nvme_io_md": false, 00:08:42.655 "write_zeroes": true, 00:08:42.655 "zcopy": false, 00:08:42.655 "get_zone_info": false, 00:08:42.655 "zone_management": false, 00:08:42.655 "zone_append": false, 00:08:42.655 "compare": true, 00:08:42.655 "compare_and_write": false, 00:08:42.655 "abort": true, 00:08:42.655 "seek_hole": false, 00:08:42.655 "seek_data": false, 00:08:42.655 "copy": true, 00:08:42.655 "nvme_iov_md": false 00:08:42.655 }, 00:08:42.655 "driver_specific": { 00:08:42.655 "gpt": { 00:08:42.655 "base_bdev": "Nvme1n1", 00:08:42.655 "offset_blocks": 655360, 00:08:42.655 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:08:42.655 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:08:42.655 "partition_name": "SPDK_TEST_second" 00:08:42.655 } 00:08:42.655 } 00:08:42.655 } 00:08:42.655 ]' 00:08:42.655 07:48:44 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@626 -- # jq -r length 00:08:42.655 07:48:44 blockdev_nvme_gpt.bdev_gpt_uuid 
-- bdev/blockdev.sh@626 -- # [[ 1 == \1 ]] 00:08:42.655 07:48:44 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # jq -r '.[0].aliases[0]' 00:08:42.655 07:48:44 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:08:42.655 07:48:44 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:08:42.655 07:48:44 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:08:42.655 07:48:44 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@630 -- # killprocess 64062 00:08:42.655 07:48:44 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@950 -- # '[' -z 64062 ']' 00:08:42.655 07:48:44 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # kill -0 64062 00:08:42.655 07:48:44 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@955 -- # uname 00:08:42.655 07:48:44 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:42.655 07:48:44 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 64062 00:08:42.913 07:48:44 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:42.913 killing process with pid 64062 00:08:42.913 07:48:44 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:42.913 07:48:44 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@968 -- # echo 'killing process with pid 64062' 00:08:42.913 07:48:44 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@969 -- # kill 64062 00:08:42.913 07:48:44 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@974 -- # wait 64062 00:08:45.443 00:08:45.443 real 0m4.415s 00:08:45.443 user 0m4.912s 00:08:45.443 sys 0m0.509s 00:08:45.443 07:48:46 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:45.443 07:48:46 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:45.443 ************************************ 00:08:45.443 END TEST bdev_gpt_uuid 00:08:45.443 ************************************ 00:08:45.443 07:48:46 blockdev_nvme_gpt -- bdev/blockdev.sh@797 -- # [[ gpt == crypto_sw ]] 00:08:45.443 07:48:46 blockdev_nvme_gpt -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:08:45.443 07:48:46 blockdev_nvme_gpt -- bdev/blockdev.sh@810 -- # cleanup 00:08:45.443 07:48:46 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:08:45.443 07:48:46 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:45.443 07:48:46 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:08:45.443 07:48:46 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:08:45.443 07:48:46 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:08:45.443 07:48:46 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:08:45.443 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:45.443 Waiting for block devices as requested 00:08:45.701 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:08:45.701 0000:00:10.0 (1b36 0010): 
uio_pci_generic -> nvme 00:08:45.701 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:08:45.959 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:08:51.270 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:08:51.270 07:48:52 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme0n1 ]] 00:08:51.270 07:48:52 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme0n1 00:08:51.270 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:08:51.270 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:08:51.270 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:08:51.270 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:08:51.270 07:48:53 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:08:51.270 00:08:51.270 real 1m9.641s 00:08:51.270 user 1m29.717s 00:08:51.270 sys 0m10.286s 00:08:51.270 07:48:53 blockdev_nvme_gpt -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:51.270 07:48:53 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:51.270 ************************************ 00:08:51.270 END TEST blockdev_nvme_gpt 00:08:51.270 ************************************ 00:08:51.270 07:48:53 -- spdk/autotest.sh@212 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:08:51.270 07:48:53 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:51.270 07:48:53 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:51.270 07:48:53 -- common/autotest_common.sh@10 -- # set +x 00:08:51.270 ************************************ 00:08:51.270 START TEST nvme 00:08:51.270 ************************************ 00:08:51.270 07:48:53 nvme -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:08:51.270 * Looking for test storage... 00:08:51.270 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:08:51.270 07:48:53 nvme -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:51.270 07:48:53 nvme -- common/autotest_common.sh@1681 -- # lcov --version 00:08:51.270 07:48:53 nvme -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:51.529 07:48:53 nvme -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:51.529 07:48:53 nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:51.529 07:48:53 nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:51.529 07:48:53 nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:51.529 07:48:53 nvme -- scripts/common.sh@336 -- # IFS=.-: 00:08:51.529 07:48:53 nvme -- scripts/common.sh@336 -- # read -ra ver1 00:08:51.529 07:48:53 nvme -- scripts/common.sh@337 -- # IFS=.-: 00:08:51.529 07:48:53 nvme -- scripts/common.sh@337 -- # read -ra ver2 00:08:51.529 07:48:53 nvme -- scripts/common.sh@338 -- # local 'op=<' 00:08:51.529 07:48:53 nvme -- scripts/common.sh@340 -- # ver1_l=2 00:08:51.529 07:48:53 nvme -- scripts/common.sh@341 -- # ver2_l=1 00:08:51.529 07:48:53 nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:51.529 07:48:53 nvme -- scripts/common.sh@344 -- # case "$op" in 00:08:51.529 07:48:53 nvme -- scripts/common.sh@345 -- # : 1 00:08:51.529 07:48:53 nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:51.529 07:48:53 nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:51.529 07:48:53 nvme -- scripts/common.sh@365 -- # decimal 1 00:08:51.529 07:48:53 nvme -- scripts/common.sh@353 -- # local d=1 00:08:51.529 07:48:53 nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:51.529 07:48:53 nvme -- scripts/common.sh@355 -- # echo 1 00:08:51.529 07:48:53 nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:08:51.529 07:48:53 nvme -- scripts/common.sh@366 -- # decimal 2 00:08:51.529 07:48:53 nvme -- scripts/common.sh@353 -- # local d=2 00:08:51.529 07:48:53 nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:51.529 07:48:53 nvme -- scripts/common.sh@355 -- # echo 2 00:08:51.529 07:48:53 nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:08:51.529 07:48:53 nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:51.529 07:48:53 nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:51.529 07:48:53 nvme -- scripts/common.sh@368 -- # return 0 00:08:51.529 07:48:53 nvme -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:51.529 07:48:53 nvme -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:51.529 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:51.529 --rc genhtml_branch_coverage=1 00:08:51.529 --rc genhtml_function_coverage=1 00:08:51.529 --rc genhtml_legend=1 00:08:51.529 --rc geninfo_all_blocks=1 00:08:51.529 --rc geninfo_unexecuted_blocks=1 00:08:51.529 00:08:51.529 ' 00:08:51.530 07:48:53 nvme -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:51.530 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:51.530 --rc genhtml_branch_coverage=1 00:08:51.530 --rc genhtml_function_coverage=1 00:08:51.530 --rc genhtml_legend=1 00:08:51.530 --rc geninfo_all_blocks=1 00:08:51.530 --rc geninfo_unexecuted_blocks=1 00:08:51.530 00:08:51.530 ' 00:08:51.530 07:48:53 nvme -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:51.530 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:51.530 --rc genhtml_branch_coverage=1 00:08:51.530 --rc genhtml_function_coverage=1 00:08:51.530 --rc genhtml_legend=1 00:08:51.530 --rc geninfo_all_blocks=1 00:08:51.530 --rc geninfo_unexecuted_blocks=1 00:08:51.530 00:08:51.530 ' 00:08:51.530 07:48:53 nvme -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:51.530 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:51.530 --rc genhtml_branch_coverage=1 00:08:51.530 --rc genhtml_function_coverage=1 00:08:51.530 --rc genhtml_legend=1 00:08:51.530 --rc geninfo_all_blocks=1 00:08:51.530 --rc geninfo_unexecuted_blocks=1 00:08:51.530 00:08:51.530 ' 00:08:51.530 07:48:53 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:08:51.788 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:52.722 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:08:52.722 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:08:52.722 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:08:52.722 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:08:52.722 07:48:54 nvme -- nvme/nvme.sh@79 -- # uname 00:08:52.722 07:48:54 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:08:52.722 07:48:54 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:08:52.722 07:48:54 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:08:52.722 07:48:54 nvme -- common/autotest_common.sh@1082 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:08:52.722 07:48:54 nvme -- 
common/autotest_common.sh@1068 -- # _randomize_va_space=2 00:08:52.722 07:48:54 nvme -- common/autotest_common.sh@1069 -- # echo 0 00:08:52.722 07:48:54 nvme -- common/autotest_common.sh@1071 -- # stubpid=64715 00:08:52.722 07:48:54 nvme -- common/autotest_common.sh@1070 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:08:52.722 07:48:54 nvme -- common/autotest_common.sh@1072 -- # echo Waiting for stub to ready for secondary processes... 00:08:52.722 Waiting for stub to ready for secondary processes... 00:08:52.722 07:48:54 nvme -- common/autotest_common.sh@1073 -- # '[' -e /var/run/spdk_stub0 ']' 00:08:52.722 07:48:54 nvme -- common/autotest_common.sh@1075 -- # [[ -e /proc/64715 ]] 00:08:52.722 07:48:54 nvme -- common/autotest_common.sh@1076 -- # sleep 1s 00:08:52.722 [2024-10-09 07:48:54.612145] Starting SPDK v25.01-pre git sha1 1c2942c86 / DPDK 24.03.0 initialization... 00:08:52.722 [2024-10-09 07:48:54.612400] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:08:53.655 [2024-10-09 07:48:55.434424] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:53.655 07:48:55 nvme -- common/autotest_common.sh@1073 -- # '[' -e /var/run/spdk_stub0 ']' 00:08:53.655 07:48:55 nvme -- common/autotest_common.sh@1075 -- # [[ -e /proc/64715 ]] 00:08:53.655 07:48:55 nvme -- common/autotest_common.sh@1076 -- # sleep 1s 00:08:53.655 [2024-10-09 07:48:55.651713] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:08:53.655 [2024-10-09 07:48:55.651866] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:08:53.655 [2024-10-09 07:48:55.651882] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:08:53.913 [2024-10-09 07:48:55.674658] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:08:53.913 [2024-10-09 07:48:55.674753] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:08:53.913 [2024-10-09 07:48:55.684294] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:08:53.913 [2024-10-09 07:48:55.684511] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:08:53.913 [2024-10-09 07:48:55.687371] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:08:53.913 [2024-10-09 07:48:55.687660] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:08:53.913 [2024-10-09 07:48:55.687772] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:08:53.913 [2024-10-09 07:48:55.690705] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:08:53.913 [2024-10-09 07:48:55.690978] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:08:53.913 [2024-10-09 07:48:55.691094] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:08:53.913 [2024-10-09 07:48:55.694135] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:08:53.913 [2024-10-09 07:48:55.694421] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:08:53.913 [2024-10-09 07:48:55.694566] nvme_cuse.c: 
928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:08:53.913 [2024-10-09 07:48:55.694671] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:08:53.913 [2024-10-09 07:48:55.694775] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:08:54.847 07:48:56 nvme -- common/autotest_common.sh@1073 -- # '[' -e /var/run/spdk_stub0 ']' 00:08:54.847 done. 00:08:54.847 07:48:56 nvme -- common/autotest_common.sh@1078 -- # echo done. 00:08:54.847 07:48:56 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:08:54.847 07:48:56 nvme -- common/autotest_common.sh@1101 -- # '[' 10 -le 1 ']' 00:08:54.847 07:48:56 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:54.847 07:48:56 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:54.847 ************************************ 00:08:54.847 START TEST nvme_reset 00:08:54.847 ************************************ 00:08:54.847 07:48:56 nvme.nvme_reset -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:08:55.105 Initializing NVMe Controllers 00:08:55.105 Skipping QEMU NVMe SSD at 0000:00:10.0 00:08:55.105 Skipping QEMU NVMe SSD at 0000:00:11.0 00:08:55.105 Skipping QEMU NVMe SSD at 0000:00:13.0 00:08:55.105 Skipping QEMU NVMe SSD at 0000:00:12.0 00:08:55.105 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:08:55.105 00:08:55.105 real 0m0.346s 00:08:55.105 user 0m0.137s 00:08:55.105 sys 0m0.161s 00:08:55.105 07:48:56 nvme.nvme_reset -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:55.105 07:48:56 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:08:55.105 ************************************ 00:08:55.105 END TEST nvme_reset 00:08:55.105 ************************************ 00:08:55.105 07:48:56 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:08:55.105 07:48:56 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:55.105 07:48:56 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:55.105 07:48:56 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:55.105 ************************************ 00:08:55.105 START TEST nvme_identify 00:08:55.106 ************************************ 00:08:55.106 07:48:56 nvme.nvme_identify -- common/autotest_common.sh@1125 -- # nvme_identify 00:08:55.106 07:48:56 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:08:55.106 07:48:56 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:08:55.106 07:48:56 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:08:55.106 07:48:56 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:08:55.106 07:48:56 nvme.nvme_identify -- common/autotest_common.sh@1496 -- # bdfs=() 00:08:55.106 07:48:56 nvme.nvme_identify -- common/autotest_common.sh@1496 -- # local bdfs 00:08:55.106 07:48:56 nvme.nvme_identify -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:55.106 07:48:56 nvme.nvme_identify -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:55.106 07:48:56 nvme.nvme_identify -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:08:55.106 07:48:57 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:08:55.106 07:48:57 nvme.nvme_identify -- 
common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:55.106 07:48:57 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:08:55.367 [2024-10-09 07:48:57.353435] nvme_ctrlr.c:3628:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0] process 64748 terminated unexpected 00:08:55.367 ===================================================== 00:08:55.367 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:55.367 ===================================================== 00:08:55.367 Controller Capabilities/Features 00:08:55.367 ================================ 00:08:55.367 Vendor ID: 1b36 00:08:55.367 Subsystem Vendor ID: 1af4 00:08:55.367 Serial Number: 12340 00:08:55.367 Model Number: QEMU NVMe Ctrl 00:08:55.367 Firmware Version: 8.0.0 00:08:55.367 Recommended Arb Burst: 6 00:08:55.367 IEEE OUI Identifier: 00 54 52 00:08:55.367 Multi-path I/O 00:08:55.367 May have multiple subsystem ports: No 00:08:55.367 May have multiple controllers: No 00:08:55.367 Associated with SR-IOV VF: No 00:08:55.367 Max Data Transfer Size: 524288 00:08:55.367 Max Number of Namespaces: 256 00:08:55.367 Max Number of I/O Queues: 64 00:08:55.367 NVMe Specification Version (VS): 1.4 00:08:55.367 NVMe Specification Version (Identify): 1.4 00:08:55.367 Maximum Queue Entries: 2048 00:08:55.367 Contiguous Queues Required: Yes 00:08:55.367 Arbitration Mechanisms Supported 00:08:55.367 Weighted Round Robin: Not Supported 00:08:55.367 Vendor Specific: Not Supported 00:08:55.367 Reset Timeout: 7500 ms 00:08:55.367 Doorbell Stride: 4 bytes 00:08:55.367 NVM Subsystem Reset: Not Supported 00:08:55.367 Command Sets Supported 00:08:55.367 NVM Command Set: Supported 00:08:55.367 Boot Partition: Not Supported 00:08:55.367 Memory Page Size Minimum: 4096 bytes 00:08:55.367 Memory Page Size Maximum: 65536 bytes 00:08:55.367 Persistent Memory Region: Not Supported 00:08:55.367 Optional Asynchronous Events Supported 00:08:55.367 Namespace Attribute Notices: Supported 00:08:55.367 Firmware Activation Notices: Not Supported 00:08:55.367 ANA Change Notices: Not Supported 00:08:55.367 PLE Aggregate Log Change Notices: Not Supported 00:08:55.367 LBA Status Info Alert Notices: Not Supported 00:08:55.367 EGE Aggregate Log Change Notices: Not Supported 00:08:55.367 Normal NVM Subsystem Shutdown event: Not Supported 00:08:55.367 Zone Descriptor Change Notices: Not Supported 00:08:55.367 Discovery Log Change Notices: Not Supported 00:08:55.367 Controller Attributes 00:08:55.367 128-bit Host Identifier: Not Supported 00:08:55.367 Non-Operational Permissive Mode: Not Supported 00:08:55.367 NVM Sets: Not Supported 00:08:55.367 Read Recovery Levels: Not Supported 00:08:55.367 Endurance Groups: Not Supported 00:08:55.367 Predictable Latency Mode: Not Supported 00:08:55.367 Traffic Based Keep ALive: Not Supported 00:08:55.367 Namespace Granularity: Not Supported 00:08:55.367 SQ Associations: Not Supported 00:08:55.367 UUID List: Not Supported 00:08:55.367 Multi-Domain Subsystem: Not Supported 00:08:55.367 Fixed Capacity Management: Not Supported 00:08:55.367 Variable Capacity Management: Not Supported 00:08:55.367 Delete Endurance Group: Not Supported 00:08:55.367 Delete NVM Set: Not Supported 00:08:55.367 Extended LBA Formats Supported: Supported 00:08:55.367 Flexible Data Placement Supported: Not Supported 00:08:55.367 00:08:55.367 Controller Memory Buffer Support 00:08:55.367 ================================ 00:08:55.367 Supported: No 00:08:55.367 
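(The dump above and below comes from spdk_nvme_identify -i 0, which attaches as a DPDK secondary process to the stub started earlier with the same shared-memory id, letting it inspect controllers the stub already claimed. As a rough standalone sketch, assuming no stub is running, the devices are still bound to uio_pci_generic by setup.sh, and the SPDK tree sits at /home/vagrant/spdk_repo/spdk, the same per-controller dump can be taken directly over PCIe:)
# Enumerate NVMe PCI addresses the same way get_nvme_bdfs does above,
# then print the identify data for each controller in turn.
rootdir=/home/vagrant/spdk_repo/spdk
bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
for bdf in "${bdfs[@]}"; do
    "$rootdir/build/bin/spdk_nvme_identify" -r "trtype:PCIe traddr:$bdf"
done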
00:08:55.367 Persistent Memory Region Support 00:08:55.367 ================================ 00:08:55.367 Supported: No 00:08:55.367 00:08:55.367 Admin Command Set Attributes 00:08:55.367 ============================ 00:08:55.367 Security Send/Receive: Not Supported 00:08:55.367 Format NVM: Supported 00:08:55.367 Firmware Activate/Download: Not Supported 00:08:55.367 Namespace Management: Supported 00:08:55.367 Device Self-Test: Not Supported 00:08:55.367 Directives: Supported 00:08:55.367 NVMe-MI: Not Supported 00:08:55.367 Virtualization Management: Not Supported 00:08:55.367 Doorbell Buffer Config: Supported 00:08:55.367 Get LBA Status Capability: Not Supported 00:08:55.367 Command & Feature Lockdown Capability: Not Supported 00:08:55.367 Abort Command Limit: 4 00:08:55.367 Async Event Request Limit: 4 00:08:55.367 Number of Firmware Slots: N/A 00:08:55.367 Firmware Slot 1 Read-Only: N/A 00:08:55.367 Firmware Activation Without Reset: N/A 00:08:55.367 Multiple Update Detection Support: N/A 00:08:55.367 Firmware Update Granularity: No Information Provided 00:08:55.367 Per-Namespace SMART Log: Yes 00:08:55.367 Asymmetric Namespace Access Log Page: Not Supported 00:08:55.367 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:08:55.367 Command Effects Log Page: Supported 00:08:55.367 Get Log Page Extended Data: Supported 00:08:55.367 Telemetry Log Pages: Not Supported 00:08:55.367 Persistent Event Log Pages: Not Supported 00:08:55.367 Supported Log Pages Log Page: May Support 00:08:55.367 Commands Supported & Effects Log Page: Not Supported 00:08:55.367 Feature Identifiers & Effects Log Page:May Support 00:08:55.367 NVMe-MI Commands & Effects Log Page: May Support 00:08:55.367 Data Area 4 for Telemetry Log: Not Supported 00:08:55.367 Error Log Page Entries Supported: 1 00:08:55.367 Keep Alive: Not Supported 00:08:55.367 00:08:55.367 NVM Command Set Attributes 00:08:55.367 ========================== 00:08:55.367 Submission Queue Entry Size 00:08:55.367 Max: 64 00:08:55.367 Min: 64 00:08:55.367 Completion Queue Entry Size 00:08:55.367 Max: 16 00:08:55.367 Min: 16 00:08:55.367 Number of Namespaces: 256 00:08:55.367 Compare Command: Supported 00:08:55.367 Write Uncorrectable Command: Not Supported 00:08:55.367 Dataset Management Command: Supported 00:08:55.367 Write Zeroes Command: Supported 00:08:55.367 Set Features Save Field: Supported 00:08:55.367 Reservations: Not Supported 00:08:55.367 Timestamp: Supported 00:08:55.367 Copy: Supported 00:08:55.367 Volatile Write Cache: Present 00:08:55.367 Atomic Write Unit (Normal): 1 00:08:55.367 Atomic Write Unit (PFail): 1 00:08:55.367 Atomic Compare & Write Unit: 1 00:08:55.367 Fused Compare & Write: Not Supported 00:08:55.367 Scatter-Gather List 00:08:55.367 SGL Command Set: Supported 00:08:55.368 SGL Keyed: Not Supported 00:08:55.368 SGL Bit Bucket Descriptor: Not Supported 00:08:55.368 SGL Metadata Pointer: Not Supported 00:08:55.368 Oversized SGL: Not Supported 00:08:55.368 SGL Metadata Address: Not Supported 00:08:55.368 SGL Offset: Not Supported 00:08:55.368 Transport SGL Data Block: Not Supported 00:08:55.368 Replay Protected Memory Block: Not Supported 00:08:55.368 00:08:55.368 Firmware Slot Information 00:08:55.368 ========================= 00:08:55.368 Active slot: 1 00:08:55.368 Slot 1 Firmware Revision: 1.0 00:08:55.368 00:08:55.368 00:08:55.368 Commands Supported and Effects 00:08:55.368 ============================== 00:08:55.368 Admin Commands 00:08:55.368 -------------- 00:08:55.368 Delete I/O Submission Queue (00h): Supported 00:08:55.368 
Create I/O Submission Queue (01h): Supported 00:08:55.368 Get Log Page (02h): Supported 00:08:55.368 Delete I/O Completion Queue (04h): Supported 00:08:55.368 Create I/O Completion Queue (05h): Supported 00:08:55.368 Identify (06h): Supported 00:08:55.368 Abort (08h): Supported 00:08:55.368 Set Features (09h): Supported 00:08:55.368 Get Features (0Ah): Supported 00:08:55.368 Asynchronous Event Request (0Ch): Supported 00:08:55.368 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:55.368 Directive Send (19h): Supported 00:08:55.368 Directive Receive (1Ah): Supported 00:08:55.368 Virtualization Management (1Ch): Supported 00:08:55.368 Doorbell Buffer Config (7Ch): Supported 00:08:55.368 Format NVM (80h): Supported LBA-Change 00:08:55.368 I/O Commands 00:08:55.368 ------------ 00:08:55.368 Flush (00h): Supported LBA-Change 00:08:55.368 Write (01h): Supported LBA-Change 00:08:55.368 Read (02h): Supported 00:08:55.368 Compare (05h): Supported 00:08:55.368 Write Zeroes (08h): Supported LBA-Change 00:08:55.368 Dataset Management (09h): Supported LBA-Change 00:08:55.368 Unknown (0Ch): Supported 00:08:55.368 Unknown (12h): Supported 00:08:55.368 Copy (19h): Supported LBA-Change 00:08:55.368 Unknown (1Dh): Supported LBA-Change 00:08:55.368 00:08:55.368 Error Log 00:08:55.368 ========= 00:08:55.368 00:08:55.368 Arbitration 00:08:55.368 =========== 00:08:55.368 Arbitration Burst: no limit 00:08:55.368 00:08:55.368 Power Management 00:08:55.368 ================ 00:08:55.368 Number of Power States: 1 00:08:55.368 Current Power State: Power State #0 00:08:55.368 Power State #0: 00:08:55.368 Max Power: 25.00 W 00:08:55.368 Non-Operational State: Operational 00:08:55.368 Entry Latency: 16 microseconds 00:08:55.368 Exit Latency: 4 microseconds 00:08:55.368 Relative Read Throughput: 0 00:08:55.368 Relative Read Latency: 0 00:08:55.368 Relative Write Throughput: 0 00:08:55.368 Relative Write Latency: 0 00:08:55.368 Idle Power[2024-10-09 07:48:57.354914] nvme_ctrlr.c:3628:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:11.0] process 64748 terminated unexpected 00:08:55.368 : Not Reported 00:08:55.368 Active Power: Not Reported 00:08:55.368 Non-Operational Permissive Mode: Not Supported 00:08:55.368 00:08:55.368 Health Information 00:08:55.368 ================== 00:08:55.368 Critical Warnings: 00:08:55.368 Available Spare Space: OK 00:08:55.368 Temperature: OK 00:08:55.368 Device Reliability: OK 00:08:55.368 Read Only: No 00:08:55.368 Volatile Memory Backup: OK 00:08:55.368 Current Temperature: 323 Kelvin (50 Celsius) 00:08:55.368 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:55.368 Available Spare: 0% 00:08:55.368 Available Spare Threshold: 0% 00:08:55.368 Life Percentage Used: 0% 00:08:55.368 Data Units Read: 651 00:08:55.368 Data Units Written: 579 00:08:55.368 Host Read Commands: 32026 00:08:55.368 Host Write Commands: 31828 00:08:55.368 Controller Busy Time: 0 minutes 00:08:55.368 Power Cycles: 0 00:08:55.368 Power On Hours: 0 hours 00:08:55.368 Unsafe Shutdowns: 0 00:08:55.368 Unrecoverable Media Errors: 0 00:08:55.368 Lifetime Error Log Entries: 0 00:08:55.368 Warning Temperature Time: 0 minutes 00:08:55.368 Critical Temperature Time: 0 minutes 00:08:55.368 00:08:55.368 Number of Queues 00:08:55.368 ================ 00:08:55.368 Number of I/O Submission Queues: 64 00:08:55.368 Number of I/O Completion Queues: 64 00:08:55.368 00:08:55.368 ZNS Specific Controller Data 00:08:55.368 ============================ 00:08:55.368 Zone Append Size Limit: 0 00:08:55.368 00:08:55.368 
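(Temperatures in the health section above are printed in Kelvin with the derived Celsius value in parentheses: 323 Kelvin corresponds to the 50 Celsius shown, and the 343 Kelvin threshold to 70 Celsius. To spot-check just those fields rather than scanning a full report, the output can be filtered; a small sketch, assuming the same binary and the first controller from this run:)
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify \
    -r 'trtype:PCIe traddr:0000:00:10.0' \
    | grep -E 'Current Temperature|Temperature Threshold'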
00:08:55.368 Active Namespaces 00:08:55.368 ================= 00:08:55.368 Namespace ID:1 00:08:55.368 Error Recovery Timeout: Unlimited 00:08:55.368 Command Set Identifier: NVM (00h) 00:08:55.368 Deallocate: Supported 00:08:55.368 Deallocated/Unwritten Error: Supported 00:08:55.368 Deallocated Read Value: All 0x00 00:08:55.368 Deallocate in Write Zeroes: Not Supported 00:08:55.368 Deallocated Guard Field: 0xFFFF 00:08:55.368 Flush: Supported 00:08:55.368 Reservation: Not Supported 00:08:55.368 Metadata Transferred as: Separate Metadata Buffer 00:08:55.368 Namespace Sharing Capabilities: Private 00:08:55.368 Size (in LBAs): 1548666 (5GiB) 00:08:55.368 Capacity (in LBAs): 1548666 (5GiB) 00:08:55.368 Utilization (in LBAs): 1548666 (5GiB) 00:08:55.368 Thin Provisioning: Not Supported 00:08:55.368 Per-NS Atomic Units: No 00:08:55.368 Maximum Single Source Range Length: 128 00:08:55.368 Maximum Copy Length: 128 00:08:55.368 Maximum Source Range Count: 128 00:08:55.368 NGUID/EUI64 Never Reused: No 00:08:55.368 Namespace Write Protected: No 00:08:55.368 Number of LBA Formats: 8 00:08:55.368 Current LBA Format: LBA Format #07 00:08:55.368 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:55.368 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:55.368 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:55.368 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:55.368 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:55.368 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:55.368 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:55.368 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:55.368 00:08:55.368 NVM Specific Namespace Data 00:08:55.368 =========================== 00:08:55.368 Logical Block Storage Tag Mask: 0 00:08:55.368 Protection Information Capabilities: 00:08:55.368 16b Guard Protection Information Storage Tag Support: No 00:08:55.368 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:55.368 Storage Tag Check Read Support: No 00:08:55.368 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:55.368 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:55.368 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:55.368 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:55.368 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:55.368 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:55.368 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:55.368 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:55.368 ===================================================== 00:08:55.368 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:55.368 ===================================================== 00:08:55.368 Controller Capabilities/Features 00:08:55.368 ================================ 00:08:55.368 Vendor ID: 1b36 00:08:55.368 Subsystem Vendor ID: 1af4 00:08:55.368 Serial Number: 12341 00:08:55.368 Model Number: QEMU NVMe Ctrl 00:08:55.368 Firmware Version: 8.0.0 00:08:55.368 Recommended Arb Burst: 6 00:08:55.368 IEEE OUI Identifier: 00 54 52 00:08:55.368 Multi-path I/O 00:08:55.368 May have multiple subsystem ports: No 00:08:55.368 May have multiple controllers: No 00:08:55.368 
Associated with SR-IOV VF: No 00:08:55.368 Max Data Transfer Size: 524288 00:08:55.368 Max Number of Namespaces: 256 00:08:55.368 Max Number of I/O Queues: 64 00:08:55.368 NVMe Specification Version (VS): 1.4 00:08:55.368 NVMe Specification Version (Identify): 1.4 00:08:55.368 Maximum Queue Entries: 2048 00:08:55.368 Contiguous Queues Required: Yes 00:08:55.368 Arbitration Mechanisms Supported 00:08:55.368 Weighted Round Robin: Not Supported 00:08:55.368 Vendor Specific: Not Supported 00:08:55.368 Reset Timeout: 7500 ms 00:08:55.368 Doorbell Stride: 4 bytes 00:08:55.368 NVM Subsystem Reset: Not Supported 00:08:55.368 Command Sets Supported 00:08:55.368 NVM Command Set: Supported 00:08:55.368 Boot Partition: Not Supported 00:08:55.368 Memory Page Size Minimum: 4096 bytes 00:08:55.368 Memory Page Size Maximum: 65536 bytes 00:08:55.368 Persistent Memory Region: Not Supported 00:08:55.368 Optional Asynchronous Events Supported 00:08:55.368 Namespace Attribute Notices: Supported 00:08:55.368 Firmware Activation Notices: Not Supported 00:08:55.368 ANA Change Notices: Not Supported 00:08:55.368 PLE Aggregate Log Change Notices: Not Supported 00:08:55.368 LBA Status Info Alert Notices: Not Supported 00:08:55.368 EGE Aggregate Log Change Notices: Not Supported 00:08:55.368 Normal NVM Subsystem Shutdown event: Not Supported 00:08:55.368 Zone Descriptor Change Notices: Not Supported 00:08:55.368 Discovery Log Change Notices: Not Supported 00:08:55.368 Controller Attributes 00:08:55.369 128-bit Host Identifier: Not Supported 00:08:55.369 Non-Operational Permissive Mode: Not Supported 00:08:55.369 NVM Sets: Not Supported 00:08:55.369 Read Recovery Levels: Not Supported 00:08:55.369 Endurance Groups: Not Supported 00:08:55.369 Predictable Latency Mode: Not Supported 00:08:55.369 Traffic Based Keep ALive: Not Supported 00:08:55.369 Namespace Granularity: Not Supported 00:08:55.369 SQ Associations: Not Supported 00:08:55.369 UUID List: Not Supported 00:08:55.369 Multi-Domain Subsystem: Not Supported 00:08:55.369 Fixed Capacity Management: Not Supported 00:08:55.369 Variable Capacity Management: Not Supported 00:08:55.369 Delete Endurance Group: Not Supported 00:08:55.369 Delete NVM Set: Not Supported 00:08:55.369 Extended LBA Formats Supported: Supported 00:08:55.369 Flexible Data Placement Supported: Not Supported 00:08:55.369 00:08:55.369 Controller Memory Buffer Support 00:08:55.369 ================================ 00:08:55.369 Supported: No 00:08:55.369 00:08:55.369 Persistent Memory Region Support 00:08:55.369 ================================ 00:08:55.369 Supported: No 00:08:55.369 00:08:55.369 Admin Command Set Attributes 00:08:55.369 ============================ 00:08:55.369 Security Send/Receive: Not Supported 00:08:55.369 Format NVM: Supported 00:08:55.369 Firmware Activate/Download: Not Supported 00:08:55.369 Namespace Management: Supported 00:08:55.369 Device Self-Test: Not Supported 00:08:55.369 Directives: Supported 00:08:55.369 NVMe-MI: Not Supported 00:08:55.369 Virtualization Management: Not Supported 00:08:55.369 Doorbell Buffer Config: Supported 00:08:55.369 Get LBA Status Capability: Not Supported 00:08:55.369 Command & Feature Lockdown Capability: Not Supported 00:08:55.369 Abort Command Limit: 4 00:08:55.369 Async Event Request Limit: 4 00:08:55.369 Number of Firmware Slots: N/A 00:08:55.369 Firmware Slot 1 Read-Only: N/A 00:08:55.369 Firmware Activation Without Reset: N/A 00:08:55.369 Multiple Update Detection Support: N/A 00:08:55.369 Firmware Update Granularity: No Information 
Provided 00:08:55.369 Per-Namespace SMART Log: Yes 00:08:55.369 Asymmetric Namespace Access Log Page: Not Supported 00:08:55.369 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:08:55.369 Command Effects Log Page: Supported 00:08:55.369 Get Log Page Extended Data: Supported 00:08:55.369 Telemetry Log Pages: Not Supported 00:08:55.369 Persistent Event Log Pages: Not Supported 00:08:55.369 Supported Log Pages Log Page: May Support 00:08:55.369 Commands Supported & Effects Log Page: Not Supported 00:08:55.369 Feature Identifiers & Effects Log Page:May Support 00:08:55.369 NVMe-MI Commands & Effects Log Page: May Support 00:08:55.369 Data Area 4 for Telemetry Log: Not Supported 00:08:55.369 Error Log Page Entries Supported: 1 00:08:55.369 Keep Alive: Not Supported 00:08:55.369 00:08:55.369 NVM Command Set Attributes 00:08:55.369 ========================== 00:08:55.369 Submission Queue Entry Size 00:08:55.369 Max: 64 00:08:55.369 Min: 64 00:08:55.369 Completion Queue Entry Size 00:08:55.369 Max: 16 00:08:55.369 Min: 16 00:08:55.369 Number of Namespaces: 256 00:08:55.369 Compare Command: Supported 00:08:55.369 Write Uncorrectable Command: Not Supported 00:08:55.369 Dataset Management Command: Supported 00:08:55.369 Write Zeroes Command: Supported 00:08:55.369 Set Features Save Field: Supported 00:08:55.369 Reservations: Not Supported 00:08:55.369 Timestamp: Supported 00:08:55.369 Copy: Supported 00:08:55.369 Volatile Write Cache: Present 00:08:55.369 Atomic Write Unit (Normal): 1 00:08:55.369 Atomic Write Unit (PFail): 1 00:08:55.369 Atomic Compare & Write Unit: 1 00:08:55.369 Fused Compare & Write: Not Supported 00:08:55.369 Scatter-Gather List 00:08:55.369 SGL Command Set: Supported 00:08:55.369 SGL Keyed: Not Supported 00:08:55.369 SGL Bit Bucket Descriptor: Not Supported 00:08:55.369 SGL Metadata Pointer: Not Supported 00:08:55.369 Oversized SGL: Not Supported 00:08:55.369 SGL Metadata Address: Not Supported 00:08:55.369 SGL Offset: Not Supported 00:08:55.369 Transport SGL Data Block: Not Supported 00:08:55.369 Replay Protected Memory Block: Not Supported 00:08:55.369 00:08:55.369 Firmware Slot Information 00:08:55.369 ========================= 00:08:55.369 Active slot: 1 00:08:55.369 Slot 1 Firmware Revision: 1.0 00:08:55.369 00:08:55.369 00:08:55.369 Commands Supported and Effects 00:08:55.369 ============================== 00:08:55.369 Admin Commands 00:08:55.369 -------------- 00:08:55.369 Delete I/O Submission Queue (00h): Supported 00:08:55.369 Create I/O Submission Queue (01h): Supported 00:08:55.369 Get Log Page (02h): Supported 00:08:55.369 Delete I/O Completion Queue (04h): Supported 00:08:55.369 Create I/O Completion Queue (05h): Supported 00:08:55.369 Identify (06h): Supported 00:08:55.369 Abort (08h): Supported 00:08:55.369 Set Features (09h): Supported 00:08:55.369 Get Features (0Ah): Supported 00:08:55.369 Asynchronous Event Request (0Ch): Supported 00:08:55.369 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:55.369 Directive Send (19h): Supported 00:08:55.369 Directive Receive (1Ah): Supported 00:08:55.369 Virtualization Management (1Ch): Supported 00:08:55.369 Doorbell Buffer Config (7Ch): Supported 00:08:55.369 Format NVM (80h): Supported LBA-Change 00:08:55.369 I/O Commands 00:08:55.369 ------------ 00:08:55.369 Flush (00h): Supported LBA-Change 00:08:55.369 Write (01h): Supported LBA-Change 00:08:55.369 Read (02h): Supported 00:08:55.369 Compare (05h): Supported 00:08:55.369 Write Zeroes (08h): Supported LBA-Change 00:08:55.369 Dataset Management (09h): 
Supported LBA-Change 00:08:55.369 Unknown (0Ch): Supported 00:08:55.369 Unknown (12h): Supported 00:08:55.369 Copy (19h): Supported LBA-Change 00:08:55.369 Unknown (1Dh): Supported LBA-Change 00:08:55.369 00:08:55.369 Error Log 00:08:55.369 ========= 00:08:55.369 00:08:55.369 Arbitration 00:08:55.369 =========== 00:08:55.369 Arbitration Burst: no limit 00:08:55.369 00:08:55.369 Power Management 00:08:55.369 ================ 00:08:55.369 Number of Power States: 1 00:08:55.369 Current Power State: Power State #0 00:08:55.369 Power State #0: 00:08:55.369 Max Power: 25.00 W 00:08:55.369 Non-Operational State: Operational 00:08:55.369 Entry Latency: 16 microseconds 00:08:55.369 Exit Latency: 4 microseconds 00:08:55.369 Relative Read Throughput: 0 00:08:55.369 Relative Read Latency: 0 00:08:55.369 Relative Write Throughput: 0 00:08:55.369 Relative Write Latency: 0 00:08:55.369 Idle Power: Not Reported 00:08:55.369 Active Power: Not Reported 00:08:55.369 Non-Operational Permissive Mode: Not Supported 00:08:55.369 00:08:55.369 Health Information 00:08:55.369 ================== 00:08:55.369 Critical Warnings: 00:08:55.369 Available Spare Space: OK 00:08:55.369 Temperature: [2024-10-09 07:48:57.356102] nvme_ctrlr.c:3628:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:13.0] process 64748 terminated unexpected 00:08:55.369 OK 00:08:55.369 Device Reliability: OK 00:08:55.369 Read Only: No 00:08:55.369 Volatile Memory Backup: OK 00:08:55.369 Current Temperature: 323 Kelvin (50 Celsius) 00:08:55.369 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:55.369 Available Spare: 0% 00:08:55.369 Available Spare Threshold: 0% 00:08:55.369 Life Percentage Used: 0% 00:08:55.369 Data Units Read: 958 00:08:55.369 Data Units Written: 818 00:08:55.369 Host Read Commands: 47306 00:08:55.369 Host Write Commands: 46003 00:08:55.369 Controller Busy Time: 0 minutes 00:08:55.369 Power Cycles: 0 00:08:55.369 Power On Hours: 0 hours 00:08:55.369 Unsafe Shutdowns: 0 00:08:55.369 Unrecoverable Media Errors: 0 00:08:55.369 Lifetime Error Log Entries: 0 00:08:55.369 Warning Temperature Time: 0 minutes 00:08:55.369 Critical Temperature Time: 0 minutes 00:08:55.369 00:08:55.369 Number of Queues 00:08:55.369 ================ 00:08:55.369 Number of I/O Submission Queues: 64 00:08:55.369 Number of I/O Completion Queues: 64 00:08:55.369 00:08:55.369 ZNS Specific Controller Data 00:08:55.369 ============================ 00:08:55.369 Zone Append Size Limit: 0 00:08:55.369 00:08:55.369 00:08:55.369 Active Namespaces 00:08:55.369 ================= 00:08:55.369 Namespace ID:1 00:08:55.369 Error Recovery Timeout: Unlimited 00:08:55.369 Command Set Identifier: NVM (00h) 00:08:55.369 Deallocate: Supported 00:08:55.369 Deallocated/Unwritten Error: Supported 00:08:55.369 Deallocated Read Value: All 0x00 00:08:55.369 Deallocate in Write Zeroes: Not Supported 00:08:55.369 Deallocated Guard Field: 0xFFFF 00:08:55.369 Flush: Supported 00:08:55.369 Reservation: Not Supported 00:08:55.369 Namespace Sharing Capabilities: Private 00:08:55.369 Size (in LBAs): 1310720 (5GiB) 00:08:55.369 Capacity (in LBAs): 1310720 (5GiB) 00:08:55.369 Utilization (in LBAs): 1310720 (5GiB) 00:08:55.369 Thin Provisioning: Not Supported 00:08:55.369 Per-NS Atomic Units: No 00:08:55.369 Maximum Single Source Range Length: 128 00:08:55.369 Maximum Copy Length: 128 00:08:55.369 Maximum Source Range Count: 128 00:08:55.369 NGUID/EUI64 Never Reused: No 00:08:55.369 Namespace Write Protected: No 00:08:55.369 Number of LBA Formats: 8 00:08:55.370 Current LBA Format: LBA Format 
#04 00:08:55.370 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:55.370 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:55.370 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:55.370 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:55.370 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:55.370 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:55.370 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:55.370 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:55.370 00:08:55.370 NVM Specific Namespace Data 00:08:55.370 =========================== 00:08:55.370 Logical Block Storage Tag Mask: 0 00:08:55.370 Protection Information Capabilities: 00:08:55.370 16b Guard Protection Information Storage Tag Support: No 00:08:55.370 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:55.370 Storage Tag Check Read Support: No 00:08:55.370 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:55.370 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:55.370 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:55.370 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:55.370 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:55.370 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:55.370 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:55.370 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:55.370 ===================================================== 00:08:55.370 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:55.370 ===================================================== 00:08:55.370 Controller Capabilities/Features 00:08:55.370 ================================ 00:08:55.370 Vendor ID: 1b36 00:08:55.370 Subsystem Vendor ID: 1af4 00:08:55.370 Serial Number: 12343 00:08:55.370 Model Number: QEMU NVMe Ctrl 00:08:55.370 Firmware Version: 8.0.0 00:08:55.370 Recommended Arb Burst: 6 00:08:55.370 IEEE OUI Identifier: 00 54 52 00:08:55.370 Multi-path I/O 00:08:55.370 May have multiple subsystem ports: No 00:08:55.370 May have multiple controllers: Yes 00:08:55.370 Associated with SR-IOV VF: No 00:08:55.370 Max Data Transfer Size: 524288 00:08:55.370 Max Number of Namespaces: 256 00:08:55.370 Max Number of I/O Queues: 64 00:08:55.370 NVMe Specification Version (VS): 1.4 00:08:55.370 NVMe Specification Version (Identify): 1.4 00:08:55.370 Maximum Queue Entries: 2048 00:08:55.370 Contiguous Queues Required: Yes 00:08:55.370 Arbitration Mechanisms Supported 00:08:55.370 Weighted Round Robin: Not Supported 00:08:55.370 Vendor Specific: Not Supported 00:08:55.370 Reset Timeout: 7500 ms 00:08:55.370 Doorbell Stride: 4 bytes 00:08:55.370 NVM Subsystem Reset: Not Supported 00:08:55.370 Command Sets Supported 00:08:55.370 NVM Command Set: Supported 00:08:55.370 Boot Partition: Not Supported 00:08:55.370 Memory Page Size Minimum: 4096 bytes 00:08:55.370 Memory Page Size Maximum: 65536 bytes 00:08:55.370 Persistent Memory Region: Not Supported 00:08:55.370 Optional Asynchronous Events Supported 00:08:55.370 Namespace Attribute Notices: Supported 00:08:55.370 Firmware Activation Notices: Not Supported 00:08:55.370 ANA Change Notices: Not Supported 00:08:55.370 PLE Aggregate Log Change 
Notices: Not Supported 00:08:55.370 LBA Status Info Alert Notices: Not Supported 00:08:55.370 EGE Aggregate Log Change Notices: Not Supported 00:08:55.370 Normal NVM Subsystem Shutdown event: Not Supported 00:08:55.370 Zone Descriptor Change Notices: Not Supported 00:08:55.370 Discovery Log Change Notices: Not Supported 00:08:55.370 Controller Attributes 00:08:55.370 128-bit Host Identifier: Not Supported 00:08:55.370 Non-Operational Permissive Mode: Not Supported 00:08:55.370 NVM Sets: Not Supported 00:08:55.370 Read Recovery Levels: Not Supported 00:08:55.370 Endurance Groups: Supported 00:08:55.370 Predictable Latency Mode: Not Supported 00:08:55.370 Traffic Based Keep ALive: Not Supported 00:08:55.370 Namespace Granularity: Not Supported 00:08:55.370 SQ Associations: Not Supported 00:08:55.370 UUID List: Not Supported 00:08:55.370 Multi-Domain Subsystem: Not Supported 00:08:55.370 Fixed Capacity Management: Not Supported 00:08:55.370 Variable Capacity Management: Not Supported 00:08:55.370 Delete Endurance Group: Not Supported 00:08:55.370 Delete NVM Set: Not Supported 00:08:55.370 Extended LBA Formats Supported: Supported 00:08:55.370 Flexible Data Placement Supported: Supported 00:08:55.370 00:08:55.370 Controller Memory Buffer Support 00:08:55.370 ================================ 00:08:55.370 Supported: No 00:08:55.370 00:08:55.370 Persistent Memory Region Support 00:08:55.370 ================================ 00:08:55.370 Supported: No 00:08:55.370 00:08:55.370 Admin Command Set Attributes 00:08:55.370 ============================ 00:08:55.370 Security Send/Receive: Not Supported 00:08:55.370 Format NVM: Supported 00:08:55.370 Firmware Activate/Download: Not Supported 00:08:55.370 Namespace Management: Supported 00:08:55.370 Device Self-Test: Not Supported 00:08:55.370 Directives: Supported 00:08:55.370 NVMe-MI: Not Supported 00:08:55.370 Virtualization Management: Not Supported 00:08:55.370 Doorbell Buffer Config: Supported 00:08:55.370 Get LBA Status Capability: Not Supported 00:08:55.370 Command & Feature Lockdown Capability: Not Supported 00:08:55.370 Abort Command Limit: 4 00:08:55.370 Async Event Request Limit: 4 00:08:55.370 Number of Firmware Slots: N/A 00:08:55.370 Firmware Slot 1 Read-Only: N/A 00:08:55.370 Firmware Activation Without Reset: N/A 00:08:55.370 Multiple Update Detection Support: N/A 00:08:55.370 Firmware Update Granularity: No Information Provided 00:08:55.370 Per-Namespace SMART Log: Yes 00:08:55.370 Asymmetric Namespace Access Log Page: Not Supported 00:08:55.370 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:08:55.370 Command Effects Log Page: Supported 00:08:55.370 Get Log Page Extended Data: Supported 00:08:55.370 Telemetry Log Pages: Not Supported 00:08:55.370 Persistent Event Log Pages: Not Supported 00:08:55.370 Supported Log Pages Log Page: May Support 00:08:55.370 Commands Supported & Effects Log Page: Not Supported 00:08:55.370 Feature Identifiers & Effects Log Page:May Support 00:08:55.370 NVMe-MI Commands & Effects Log Page: May Support 00:08:55.370 Data Area 4 for Telemetry Log: Not Supported 00:08:55.370 Error Log Page Entries Supported: 1 00:08:55.370 Keep Alive: Not Supported 00:08:55.370 00:08:55.370 NVM Command Set Attributes 00:08:55.370 ========================== 00:08:55.370 Submission Queue Entry Size 00:08:55.370 Max: 64 00:08:55.370 Min: 64 00:08:55.370 Completion Queue Entry Size 00:08:55.370 Max: 16 00:08:55.370 Min: 16 00:08:55.370 Number of Namespaces: 256 00:08:55.370 Compare Command: Supported 00:08:55.370 Write 
Uncorrectable Command: Not Supported 00:08:55.370 Dataset Management Command: Supported 00:08:55.370 Write Zeroes Command: Supported 00:08:55.370 Set Features Save Field: Supported 00:08:55.370 Reservations: Not Supported 00:08:55.370 Timestamp: Supported 00:08:55.370 Copy: Supported 00:08:55.370 Volatile Write Cache: Present 00:08:55.370 Atomic Write Unit (Normal): 1 00:08:55.370 Atomic Write Unit (PFail): 1 00:08:55.370 Atomic Compare & Write Unit: 1 00:08:55.370 Fused Compare & Write: Not Supported 00:08:55.370 Scatter-Gather List 00:08:55.370 SGL Command Set: Supported 00:08:55.370 SGL Keyed: Not Supported 00:08:55.370 SGL Bit Bucket Descriptor: Not Supported 00:08:55.370 SGL Metadata Pointer: Not Supported 00:08:55.370 Oversized SGL: Not Supported 00:08:55.370 SGL Metadata Address: Not Supported 00:08:55.370 SGL Offset: Not Supported 00:08:55.370 Transport SGL Data Block: Not Supported 00:08:55.370 Replay Protected Memory Block: Not Supported 00:08:55.370 00:08:55.370 Firmware Slot Information 00:08:55.370 ========================= 00:08:55.370 Active slot: 1 00:08:55.370 Slot 1 Firmware Revision: 1.0 00:08:55.370 00:08:55.370 00:08:55.370 Commands Supported and Effects 00:08:55.370 ============================== 00:08:55.370 Admin Commands 00:08:55.370 -------------- 00:08:55.370 Delete I/O Submission Queue (00h): Supported 00:08:55.370 Create I/O Submission Queue (01h): Supported 00:08:55.370 Get Log Page (02h): Supported 00:08:55.370 Delete I/O Completion Queue (04h): Supported 00:08:55.370 Create I/O Completion Queue (05h): Supported 00:08:55.370 Identify (06h): Supported 00:08:55.370 Abort (08h): Supported 00:08:55.370 Set Features (09h): Supported 00:08:55.370 Get Features (0Ah): Supported 00:08:55.370 Asynchronous Event Request (0Ch): Supported 00:08:55.370 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:55.370 Directive Send (19h): Supported 00:08:55.370 Directive Receive (1Ah): Supported 00:08:55.370 Virtualization Management (1Ch): Supported 00:08:55.370 Doorbell Buffer Config (7Ch): Supported 00:08:55.370 Format NVM (80h): Supported LBA-Change 00:08:55.370 I/O Commands 00:08:55.370 ------------ 00:08:55.370 Flush (00h): Supported LBA-Change 00:08:55.370 Write (01h): Supported LBA-Change 00:08:55.370 Read (02h): Supported 00:08:55.370 Compare (05h): Supported 00:08:55.370 Write Zeroes (08h): Supported LBA-Change 00:08:55.370 Dataset Management (09h): Supported LBA-Change 00:08:55.370 Unknown (0Ch): Supported 00:08:55.371 Unknown (12h): Supported 00:08:55.371 Copy (19h): Supported LBA-Change 00:08:55.371 Unknown (1Dh): Supported LBA-Change 00:08:55.371 00:08:55.371 Error Log 00:08:55.371 ========= 00:08:55.371 00:08:55.371 Arbitration 00:08:55.371 =========== 00:08:55.371 Arbitration Burst: no limit 00:08:55.371 00:08:55.371 Power Management 00:08:55.371 ================ 00:08:55.371 Number of Power States: 1 00:08:55.371 Current Power State: Power State #0 00:08:55.371 Power State #0: 00:08:55.371 Max Power: 25.00 W 00:08:55.371 Non-Operational State: Operational 00:08:55.371 Entry Latency: 16 microseconds 00:08:55.371 Exit Latency: 4 microseconds 00:08:55.371 Relative Read Throughput: 0 00:08:55.371 Relative Read Latency: 0 00:08:55.371 Relative Write Throughput: 0 00:08:55.371 Relative Write Latency: 0 00:08:55.371 Idle Power: Not Reported 00:08:55.371 Active Power: Not Reported 00:08:55.371 Non-Operational Permissive Mode: Not Supported 00:08:55.371 00:08:55.371 Health Information 00:08:55.371 ================== 00:08:55.371 Critical Warnings: 00:08:55.371 
Available Spare Space: OK 00:08:55.371 Temperature: OK 00:08:55.371 Device Reliability: OK 00:08:55.371 Read Only: No 00:08:55.371 Volatile Memory Backup: OK 00:08:55.371 Current Temperature: 323 Kelvin (50 Celsius) 00:08:55.371 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:55.371 Available Spare: 0% 00:08:55.371 Available Spare Threshold: 0% 00:08:55.371 Life Percentage Used: 0% 00:08:55.371 Data Units Read: 775 00:08:55.371 Data Units Written: 704 00:08:55.371 Host Read Commands: 33313 00:08:55.371 Host Write Commands: 32737 00:08:55.371 Controller Busy Time: 0 minutes 00:08:55.371 Power Cycles: 0 00:08:55.371 Power On Hours: 0 hours 00:08:55.371 Unsafe Shutdowns: 0 00:08:55.371 Unrecoverable Media Errors: 0 00:08:55.371 Lifetime Error Log Entries: 0 00:08:55.371 Warning Temperature Time: 0 minutes 00:08:55.371 Critical Temperature Time: 0 minutes 00:08:55.371 00:08:55.371 Number of Queues 00:08:55.371 ================ 00:08:55.371 Number of I/O Submission Queues: 64 00:08:55.371 Number of I/O Completion Queues: 64 00:08:55.371 00:08:55.371 ZNS Specific Controller Data 00:08:55.371 ============================ 00:08:55.371 Zone Append Size Limit: 0 00:08:55.371 00:08:55.371 00:08:55.371 Active Namespaces 00:08:55.371 ================= 00:08:55.371 Namespace ID:1 00:08:55.371 Error Recovery Timeout: Unlimited 00:08:55.371 Command Set Identifier: NVM (00h) 00:08:55.371 Deallocate: Supported 00:08:55.371 Deallocated/Unwritten Error: Supported 00:08:55.371 Deallocated Read Value: All 0x00 00:08:55.371 Deallocate in Write Zeroes: Not Supported 00:08:55.371 Deallocated Guard Field: 0xFFFF 00:08:55.371 Flush: Supported 00:08:55.371 Reservation: Not Supported 00:08:55.371 Namespace Sharing Capabilities: Multiple Controllers 00:08:55.371 Size (in LBAs): 262144 (1GiB) 00:08:55.371 Capacity (in LBAs): 262144 (1GiB) 00:08:55.371 Utilization (in LBAs): 262144 (1GiB) 00:08:55.371 Thin Provisioning: Not Supported 00:08:55.371 Per-NS Atomic Units: No 00:08:55.371 Maximum Single Source Range Length: 128 00:08:55.371 Maximum Copy Length: 128 00:08:55.371 Maximum Source Range Count: 128 00:08:55.371 NGUID/EUI64 Never Reused: No 00:08:55.371 Namespace Write Protected: No 00:08:55.371 Endurance group ID: 1 00:08:55.371 Number of LBA Formats: 8 00:08:55.371 Current LBA Format: LBA Format #04 00:08:55.371 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:55.371 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:55.371 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:55.371 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:55.371 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:55.371 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:55.371 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:55.371 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:55.371 00:08:55.371 Get Feature FDP: 00:08:55.371 ================ 00:08:55.371 Enabled: Yes 00:08:55.371 FDP configuration index: 0 00:08:55.371 00:08:55.371 FDP configurations log page 00:08:55.371 =========================== 00:08:55.371 Number of FDP configurations: 1 00:08:55.371 Version: 0 00:08:55.371 Size: 112 00:08:55.371 FDP Configuration Descriptor: 0 00:08:55.371 Descriptor Size: 96 00:08:55.371 Reclaim Group Identifier format: 2 00:08:55.371 FDP Volatile Write Cache: Not Present 00:08:55.371 FDP Configuration: Valid 00:08:55.371 Vendor Specific Size: 0 00:08:55.371 Number of Reclaim Groups: 2 00:08:55.371 Number of Reclaim Unit Handles: 8 00:08:55.371 Max Placement Identifiers: 128 00:08:55.371 Number of
Namespaces Supported: 256 00:08:55.371 Reclaim Unit Nominal Size: 6000000 bytes 00:08:55.371 Estimated Reclaim Unit Time Limit: Not Reported 00:08:55.371 RUH Desc #000: RUH Type: Initially Isolated 00:08:55.371 RUH Desc #001: RUH Type: Initially Isolated 00:08:55.371 RUH Desc #002: RUH Type: Initially Isolated 00:08:55.371 RUH Desc #003: RUH Type: Initially Isolated 00:08:55.371 RUH Desc #004: RUH Type: Initially Isolated 00:08:55.371 RUH Desc #005: RUH Type: Initially Isolated 00:08:55.371 RUH Desc #006: RUH Type: Initially Isolated 00:08:55.371 RUH Desc #007: RUH Type: Initially Isolated 00:08:55.371 00:08:55.371 FDP reclaim unit handle usage log page 00:08:55.371 ====================================== 00:08:55.371 Number of Reclaim Unit Handles: 8 00:08:55.371 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:08:55.371 RUH Usage Desc #001: RUH Attributes: Unused 00:08:55.371 RUH Usage Desc #002: RUH Attributes: Unused 00:08:55.371 RUH Usage Desc #003: RUH Attributes: Unused 00:08:55.371 RUH Usage Desc #004: RUH Attributes: Unused 00:08:55.371 RUH Usage Desc #005: RUH Attributes: Unused 00:08:55.371 RUH Usage Desc #006: RUH Attributes: Unused 00:08:55.371 RUH Usage Desc #007: RUH Attributes: Unused 00:08:55.371 00:08:55.371 FDP statistics log page 00:08:55.371 ======================= 00:08:55.371 Host bytes with metadata written: 417767424 00:08:55.371 Media[2024-10-09 07:48:57.358016] nvme_ctrlr.c:3628:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:12.0] process 64748 terminated unexpected 00:08:55.371 bytes with metadata written: 417832960 00:08:55.371 Media bytes erased: 0 00:08:55.371 00:08:55.371 FDP events log page 00:08:55.371 =================== 00:08:55.371 Number of FDP events: 0 00:08:55.371 00:08:55.371 NVM Specific Namespace Data 00:08:55.371 =========================== 00:08:55.371 Logical Block Storage Tag Mask: 0 00:08:55.371 Protection Information Capabilities: 00:08:55.371 16b Guard Protection Information Storage Tag Support: No 00:08:55.371 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:55.371 Storage Tag Check Read Support: No 00:08:55.371 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:55.371 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:55.371 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:55.371 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:55.371 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:55.371 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:55.371 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:55.371 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:55.371 ===================================================== 00:08:55.371 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:55.371 ===================================================== 00:08:55.371 Controller Capabilities/Features 00:08:55.371 ================================ 00:08:55.371 Vendor ID: 1b36 00:08:55.371 Subsystem Vendor ID: 1af4 00:08:55.371 Serial Number: 12342 00:08:55.371 Model Number: QEMU NVMe Ctrl 00:08:55.371 Firmware Version: 8.0.0 00:08:55.371 Recommended Arb Burst: 6 00:08:55.371 IEEE OUI Identifier: 00 54 52 00:08:55.371 Multi-path I/O 00:08:55.371
May have multiple subsystem ports: No 00:08:55.371 May have multiple controllers: No 00:08:55.371 Associated with SR-IOV VF: No 00:08:55.371 Max Data Transfer Size: 524288 00:08:55.371 Max Number of Namespaces: 256 00:08:55.371 Max Number of I/O Queues: 64 00:08:55.371 NVMe Specification Version (VS): 1.4 00:08:55.371 NVMe Specification Version (Identify): 1.4 00:08:55.371 Maximum Queue Entries: 2048 00:08:55.371 Contiguous Queues Required: Yes 00:08:55.371 Arbitration Mechanisms Supported 00:08:55.371 Weighted Round Robin: Not Supported 00:08:55.371 Vendor Specific: Not Supported 00:08:55.371 Reset Timeout: 7500 ms 00:08:55.371 Doorbell Stride: 4 bytes 00:08:55.372 NVM Subsystem Reset: Not Supported 00:08:55.372 Command Sets Supported 00:08:55.372 NVM Command Set: Supported 00:08:55.372 Boot Partition: Not Supported 00:08:55.372 Memory Page Size Minimum: 4096 bytes 00:08:55.372 Memory Page Size Maximum: 65536 bytes 00:08:55.372 Persistent Memory Region: Not Supported 00:08:55.372 Optional Asynchronous Events Supported 00:08:55.372 Namespace Attribute Notices: Supported 00:08:55.372 Firmware Activation Notices: Not Supported 00:08:55.372 ANA Change Notices: Not Supported 00:08:55.372 PLE Aggregate Log Change Notices: Not Supported 00:08:55.372 LBA Status Info Alert Notices: Not Supported 00:08:55.372 EGE Aggregate Log Change Notices: Not Supported 00:08:55.372 Normal NVM Subsystem Shutdown event: Not Supported 00:08:55.372 Zone Descriptor Change Notices: Not Supported 00:08:55.372 Discovery Log Change Notices: Not Supported 00:08:55.372 Controller Attributes 00:08:55.372 128-bit Host Identifier: Not Supported 00:08:55.372 Non-Operational Permissive Mode: Not Supported 00:08:55.372 NVM Sets: Not Supported 00:08:55.372 Read Recovery Levels: Not Supported 00:08:55.372 Endurance Groups: Not Supported 00:08:55.372 Predictable Latency Mode: Not Supported 00:08:55.372 Traffic Based Keep ALive: Not Supported 00:08:55.372 Namespace Granularity: Not Supported 00:08:55.372 SQ Associations: Not Supported 00:08:55.372 UUID List: Not Supported 00:08:55.372 Multi-Domain Subsystem: Not Supported 00:08:55.372 Fixed Capacity Management: Not Supported 00:08:55.372 Variable Capacity Management: Not Supported 00:08:55.372 Delete Endurance Group: Not Supported 00:08:55.372 Delete NVM Set: Not Supported 00:08:55.372 Extended LBA Formats Supported: Supported 00:08:55.372 Flexible Data Placement Supported: Not Supported 00:08:55.372 00:08:55.372 Controller Memory Buffer Support 00:08:55.372 ================================ 00:08:55.372 Supported: No 00:08:55.372 00:08:55.372 Persistent Memory Region Support 00:08:55.372 ================================ 00:08:55.372 Supported: No 00:08:55.372 00:08:55.372 Admin Command Set Attributes 00:08:55.372 ============================ 00:08:55.372 Security Send/Receive: Not Supported 00:08:55.372 Format NVM: Supported 00:08:55.372 Firmware Activate/Download: Not Supported 00:08:55.372 Namespace Management: Supported 00:08:55.372 Device Self-Test: Not Supported 00:08:55.372 Directives: Supported 00:08:55.372 NVMe-MI: Not Supported 00:08:55.372 Virtualization Management: Not Supported 00:08:55.372 Doorbell Buffer Config: Supported 00:08:55.372 Get LBA Status Capability: Not Supported 00:08:55.372 Command & Feature Lockdown Capability: Not Supported 00:08:55.372 Abort Command Limit: 4 00:08:55.372 Async Event Request Limit: 4 00:08:55.372 Number of Firmware Slots: N/A 00:08:55.372 Firmware Slot 1 Read-Only: N/A 00:08:55.372 Firmware Activation Without Reset: N/A 00:08:55.372 
Multiple Update Detection Support: N/A 00:08:55.372 Firmware Update Granularity: No Information Provided 00:08:55.372 Per-Namespace SMART Log: Yes 00:08:55.372 Asymmetric Namespace Access Log Page: Not Supported 00:08:55.372 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:08:55.372 Command Effects Log Page: Supported 00:08:55.372 Get Log Page Extended Data: Supported 00:08:55.372 Telemetry Log Pages: Not Supported 00:08:55.372 Persistent Event Log Pages: Not Supported 00:08:55.372 Supported Log Pages Log Page: May Support 00:08:55.372 Commands Supported & Effects Log Page: Not Supported 00:08:55.372 Feature Identifiers & Effects Log Page:May Support 00:08:55.372 NVMe-MI Commands & Effects Log Page: May Support 00:08:55.372 Data Area 4 for Telemetry Log: Not Supported 00:08:55.372 Error Log Page Entries Supported: 1 00:08:55.372 Keep Alive: Not Supported 00:08:55.372 00:08:55.372 NVM Command Set Attributes 00:08:55.372 ========================== 00:08:55.372 Submission Queue Entry Size 00:08:55.372 Max: 64 00:08:55.372 Min: 64 00:08:55.372 Completion Queue Entry Size 00:08:55.372 Max: 16 00:08:55.372 Min: 16 00:08:55.372 Number of Namespaces: 256 00:08:55.372 Compare Command: Supported 00:08:55.372 Write Uncorrectable Command: Not Supported 00:08:55.372 Dataset Management Command: Supported 00:08:55.372 Write Zeroes Command: Supported 00:08:55.372 Set Features Save Field: Supported 00:08:55.372 Reservations: Not Supported 00:08:55.372 Timestamp: Supported 00:08:55.372 Copy: Supported 00:08:55.372 Volatile Write Cache: Present 00:08:55.372 Atomic Write Unit (Normal): 1 00:08:55.372 Atomic Write Unit (PFail): 1 00:08:55.372 Atomic Compare & Write Unit: 1 00:08:55.372 Fused Compare & Write: Not Supported 00:08:55.372 Scatter-Gather List 00:08:55.372 SGL Command Set: Supported 00:08:55.372 SGL Keyed: Not Supported 00:08:55.372 SGL Bit Bucket Descriptor: Not Supported 00:08:55.372 SGL Metadata Pointer: Not Supported 00:08:55.372 Oversized SGL: Not Supported 00:08:55.372 SGL Metadata Address: Not Supported 00:08:55.372 SGL Offset: Not Supported 00:08:55.372 Transport SGL Data Block: Not Supported 00:08:55.372 Replay Protected Memory Block: Not Supported 00:08:55.372 00:08:55.372 Firmware Slot Information 00:08:55.372 ========================= 00:08:55.372 Active slot: 1 00:08:55.372 Slot 1 Firmware Revision: 1.0 00:08:55.372 00:08:55.372 00:08:55.372 Commands Supported and Effects 00:08:55.372 ============================== 00:08:55.372 Admin Commands 00:08:55.372 -------------- 00:08:55.372 Delete I/O Submission Queue (00h): Supported 00:08:55.372 Create I/O Submission Queue (01h): Supported 00:08:55.372 Get Log Page (02h): Supported 00:08:55.372 Delete I/O Completion Queue (04h): Supported 00:08:55.372 Create I/O Completion Queue (05h): Supported 00:08:55.372 Identify (06h): Supported 00:08:55.372 Abort (08h): Supported 00:08:55.372 Set Features (09h): Supported 00:08:55.372 Get Features (0Ah): Supported 00:08:55.372 Asynchronous Event Request (0Ch): Supported 00:08:55.372 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:55.372 Directive Send (19h): Supported 00:08:55.372 Directive Receive (1Ah): Supported 00:08:55.372 Virtualization Management (1Ch): Supported 00:08:55.372 Doorbell Buffer Config (7Ch): Supported 00:08:55.372 Format NVM (80h): Supported LBA-Change 00:08:55.372 I/O Commands 00:08:55.372 ------------ 00:08:55.372 Flush (00h): Supported LBA-Change 00:08:55.372 Write (01h): Supported LBA-Change 00:08:55.372 Read (02h): Supported 00:08:55.372 Compare (05h): Supported 
00:08:55.372 Write Zeroes (08h): Supported LBA-Change 00:08:55.372 Dataset Management (09h): Supported LBA-Change 00:08:55.372 Unknown (0Ch): Supported 00:08:55.372 Unknown (12h): Supported 00:08:55.372 Copy (19h): Supported LBA-Change 00:08:55.372 Unknown (1Dh): Supported LBA-Change 00:08:55.372 00:08:55.372 Error Log 00:08:55.372 ========= 00:08:55.372 00:08:55.372 Arbitration 00:08:55.372 =========== 00:08:55.372 Arbitration Burst: no limit 00:08:55.372 00:08:55.372 Power Management 00:08:55.372 ================ 00:08:55.372 Number of Power States: 1 00:08:55.372 Current Power State: Power State #0 00:08:55.372 Power State #0: 00:08:55.372 Max Power: 25.00 W 00:08:55.372 Non-Operational State: Operational 00:08:55.372 Entry Latency: 16 microseconds 00:08:55.372 Exit Latency: 4 microseconds 00:08:55.372 Relative Read Throughput: 0 00:08:55.372 Relative Read Latency: 0 00:08:55.372 Relative Write Throughput: 0 00:08:55.372 Relative Write Latency: 0 00:08:55.372 Idle Power: Not Reported 00:08:55.372 Active Power: Not Reported 00:08:55.372 Non-Operational Permissive Mode: Not Supported 00:08:55.372 00:08:55.372 Health Information 00:08:55.372 ================== 00:08:55.372 Critical Warnings: 00:08:55.372 Available Spare Space: OK 00:08:55.372 Temperature: OK 00:08:55.372 Device Reliability: OK 00:08:55.372 Read Only: No 00:08:55.372 Volatile Memory Backup: OK 00:08:55.372 Current Temperature: 323 Kelvin (50 Celsius) 00:08:55.372 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:55.372 Available Spare: 0% 00:08:55.372 Available Spare Threshold: 0% 00:08:55.372 Life Percentage Used: 0% 00:08:55.372 Data Units Read: 2067 00:08:55.372 Data Units Written: 1855 00:08:55.372 Host Read Commands: 97527 00:08:55.372 Host Write Commands: 95840 00:08:55.372 Controller Busy Time: 0 minutes 00:08:55.372 Power Cycles: 0 00:08:55.372 Power On Hours: 0 hours 00:08:55.372 Unsafe Shutdowns: 0 00:08:55.372 Unrecoverable Media Errors: 0 00:08:55.372 Lifetime Error Log Entries: 0 00:08:55.372 Warning Temperature Time: 0 minutes 00:08:55.372 Critical Temperature Time: 0 minutes 00:08:55.372 00:08:55.372 Number of Queues 00:08:55.372 ================ 00:08:55.372 Number of I/O Submission Queues: 64 00:08:55.373 Number of I/O Completion Queues: 64 00:08:55.373 00:08:55.373 ZNS Specific Controller Data 00:08:55.373 ============================ 00:08:55.373 Zone Append Size Limit: 0 00:08:55.373 00:08:55.373 00:08:55.373 Active Namespaces 00:08:55.373 ================= 00:08:55.373 Namespace ID:1 00:08:55.373 Error Recovery Timeout: Unlimited 00:08:55.373 Command Set Identifier: NVM (00h) 00:08:55.373 Deallocate: Supported 00:08:55.373 Deallocated/Unwritten Error: Supported 00:08:55.373 Deallocated Read Value: All 0x00 00:08:55.373 Deallocate in Write Zeroes: Not Supported 00:08:55.373 Deallocated Guard Field: 0xFFFF 00:08:55.373 Flush: Supported 00:08:55.373 Reservation: Not Supported 00:08:55.373 Namespace Sharing Capabilities: Private 00:08:55.373 Size (in LBAs): 1048576 (4GiB) 00:08:55.373 Capacity (in LBAs): 1048576 (4GiB) 00:08:55.373 Utilization (in LBAs): 1048576 (4GiB) 00:08:55.373 Thin Provisioning: Not Supported 00:08:55.373 Per-NS Atomic Units: No 00:08:55.373 Maximum Single Source Range Length: 128 00:08:55.373 Maximum Copy Length: 128 00:08:55.373 Maximum Source Range Count: 128 00:08:55.373 NGUID/EUI64 Never Reused: No 00:08:55.373 Namespace Write Protected: No 00:08:55.373 Number of LBA Formats: 8 00:08:55.373 Current LBA Format: LBA Format #04 00:08:55.373 LBA Format #00: Data Size: 512 Metadata 
Size: 0 00:08:55.373 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:55.373 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:55.373 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:55.373 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:55.373 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:55.373 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:55.373 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:55.373 00:08:55.373 NVM Specific Namespace Data 00:08:55.373 =========================== 00:08:55.373 Logical Block Storage Tag Mask: 0 00:08:55.373 Protection Information Capabilities: 00:08:55.373 16b Guard Protection Information Storage Tag Support: No 00:08:55.373 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:55.373 Storage Tag Check Read Support: No 00:08:55.373 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:55.373 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:55.373 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:55.373 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:55.373 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:55.373 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:55.373 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:55.373 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:55.373 Namespace ID:2 00:08:55.373 Error Recovery Timeout: Unlimited 00:08:55.373 Command Set Identifier: NVM (00h) 00:08:55.373 Deallocate: Supported 00:08:55.373 Deallocated/Unwritten Error: Supported 00:08:55.373 Deallocated Read Value: All 0x00 00:08:55.373 Deallocate in Write Zeroes: Not Supported 00:08:55.373 Deallocated Guard Field: 0xFFFF 00:08:55.373 Flush: Supported 00:08:55.373 Reservation: Not Supported 00:08:55.373 Namespace Sharing Capabilities: Private 00:08:55.373 Size (in LBAs): 1048576 (4GiB) 00:08:55.373 Capacity (in LBAs): 1048576 (4GiB) 00:08:55.373 Utilization (in LBAs): 1048576 (4GiB) 00:08:55.373 Thin Provisioning: Not Supported 00:08:55.373 Per-NS Atomic Units: No 00:08:55.373 Maximum Single Source Range Length: 128 00:08:55.373 Maximum Copy Length: 128 00:08:55.373 Maximum Source Range Count: 128 00:08:55.373 NGUID/EUI64 Never Reused: No 00:08:55.373 Namespace Write Protected: No 00:08:55.373 Number of LBA Formats: 8 00:08:55.373 Current LBA Format: LBA Format #04 00:08:55.373 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:55.373 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:55.373 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:55.373 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:55.373 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:55.373 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:55.373 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:55.373 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:55.373 00:08:55.373 NVM Specific Namespace Data 00:08:55.373 =========================== 00:08:55.373 Logical Block Storage Tag Mask: 0 00:08:55.373 Protection Information Capabilities: 00:08:55.373 16b Guard Protection Information Storage Tag Support: No 00:08:55.373 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:55.373 Storage 
Tag Check Read Support: No 00:08:55.373 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:55.373 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:55.373 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:55.373 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:55.373 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:55.373 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:55.373 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:55.373 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:55.373 Namespace ID:3 00:08:55.373 Error Recovery Timeout: Unlimited 00:08:55.373 Command Set Identifier: NVM (00h) 00:08:55.373 Deallocate: Supported 00:08:55.373 Deallocated/Unwritten Error: Supported 00:08:55.373 Deallocated Read Value: All 0x00 00:08:55.373 Deallocate in Write Zeroes: Not Supported 00:08:55.373 Deallocated Guard Field: 0xFFFF 00:08:55.373 Flush: Supported 00:08:55.373 Reservation: Not Supported 00:08:55.373 Namespace Sharing Capabilities: Private 00:08:55.373 Size (in LBAs): 1048576 (4GiB) 00:08:55.631 Capacity (in LBAs): 1048576 (4GiB) 00:08:55.631 Utilization (in LBAs): 1048576 (4GiB) 00:08:55.631 Thin Provisioning: Not Supported 00:08:55.631 Per-NS Atomic Units: No 00:08:55.631 Maximum Single Source Range Length: 128 00:08:55.631 Maximum Copy Length: 128 00:08:55.631 Maximum Source Range Count: 128 00:08:55.631 NGUID/EUI64 Never Reused: No 00:08:55.631 Namespace Write Protected: No 00:08:55.631 Number of LBA Formats: 8 00:08:55.631 Current LBA Format: LBA Format #04 00:08:55.631 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:55.631 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:55.631 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:55.631 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:55.631 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:55.631 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:55.631 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:55.631 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:55.631 00:08:55.631 NVM Specific Namespace Data 00:08:55.631 =========================== 00:08:55.631 Logical Block Storage Tag Mask: 0 00:08:55.631 Protection Information Capabilities: 00:08:55.631 16b Guard Protection Information Storage Tag Support: No 00:08:55.631 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:55.631 Storage Tag Check Read Support: No 00:08:55.631 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:55.631 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:55.631 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:55.631 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:55.631 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:55.631 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:55.631 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:55.631 Extended LBA Format #07: Storage Tag Size: 0 , 
Protection Information Format: 16b Guard PI 00:08:55.631 07:48:57 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:08:55.631 07:48:57 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:08:55.891 ===================================================== 00:08:55.891 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:55.891 ===================================================== 00:08:55.891 Controller Capabilities/Features 00:08:55.891 ================================ 00:08:55.891 Vendor ID: 1b36 00:08:55.891 Subsystem Vendor ID: 1af4 00:08:55.891 Serial Number: 12340 00:08:55.891 Model Number: QEMU NVMe Ctrl 00:08:55.891 Firmware Version: 8.0.0 00:08:55.891 Recommended Arb Burst: 6 00:08:55.891 IEEE OUI Identifier: 00 54 52 00:08:55.891 Multi-path I/O 00:08:55.891 May have multiple subsystem ports: No 00:08:55.891 May have multiple controllers: No 00:08:55.891 Associated with SR-IOV VF: No 00:08:55.891 Max Data Transfer Size: 524288 00:08:55.891 Max Number of Namespaces: 256 00:08:55.891 Max Number of I/O Queues: 64 00:08:55.891 NVMe Specification Version (VS): 1.4 00:08:55.891 NVMe Specification Version (Identify): 1.4 00:08:55.891 Maximum Queue Entries: 2048 00:08:55.891 Contiguous Queues Required: Yes 00:08:55.891 Arbitration Mechanisms Supported 00:08:55.891 Weighted Round Robin: Not Supported 00:08:55.891 Vendor Specific: Not Supported 00:08:55.891 Reset Timeout: 7500 ms 00:08:55.891 Doorbell Stride: 4 bytes 00:08:55.891 NVM Subsystem Reset: Not Supported 00:08:55.891 Command Sets Supported 00:08:55.891 NVM Command Set: Supported 00:08:55.891 Boot Partition: Not Supported 00:08:55.891 Memory Page Size Minimum: 4096 bytes 00:08:55.891 Memory Page Size Maximum: 65536 bytes 00:08:55.891 Persistent Memory Region: Not Supported 00:08:55.891 Optional Asynchronous Events Supported 00:08:55.891 Namespace Attribute Notices: Supported 00:08:55.891 Firmware Activation Notices: Not Supported 00:08:55.891 ANA Change Notices: Not Supported 00:08:55.891 PLE Aggregate Log Change Notices: Not Supported 00:08:55.891 LBA Status Info Alert Notices: Not Supported 00:08:55.891 EGE Aggregate Log Change Notices: Not Supported 00:08:55.891 Normal NVM Subsystem Shutdown event: Not Supported 00:08:55.891 Zone Descriptor Change Notices: Not Supported 00:08:55.891 Discovery Log Change Notices: Not Supported 00:08:55.891 Controller Attributes 00:08:55.891 128-bit Host Identifier: Not Supported 00:08:55.891 Non-Operational Permissive Mode: Not Supported 00:08:55.891 NVM Sets: Not Supported 00:08:55.891 Read Recovery Levels: Not Supported 00:08:55.891 Endurance Groups: Not Supported 00:08:55.891 Predictable Latency Mode: Not Supported 00:08:55.891 Traffic Based Keep ALive: Not Supported 00:08:55.891 Namespace Granularity: Not Supported 00:08:55.891 SQ Associations: Not Supported 00:08:55.891 UUID List: Not Supported 00:08:55.891 Multi-Domain Subsystem: Not Supported 00:08:55.891 Fixed Capacity Management: Not Supported 00:08:55.891 Variable Capacity Management: Not Supported 00:08:55.891 Delete Endurance Group: Not Supported 00:08:55.891 Delete NVM Set: Not Supported 00:08:55.891 Extended LBA Formats Supported: Supported 00:08:55.891 Flexible Data Placement Supported: Not Supported 00:08:55.891 00:08:55.891 Controller Memory Buffer Support 00:08:55.891 ================================ 00:08:55.891 Supported: No 00:08:55.891 00:08:55.891 Persistent Memory Region Support 00:08:55.891 
================================ 00:08:55.891 Supported: No 00:08:55.891 00:08:55.891 Admin Command Set Attributes 00:08:55.891 ============================ 00:08:55.891 Security Send/Receive: Not Supported 00:08:55.891 Format NVM: Supported 00:08:55.891 Firmware Activate/Download: Not Supported 00:08:55.891 Namespace Management: Supported 00:08:55.891 Device Self-Test: Not Supported 00:08:55.891 Directives: Supported 00:08:55.891 NVMe-MI: Not Supported 00:08:55.891 Virtualization Management: Not Supported 00:08:55.891 Doorbell Buffer Config: Supported 00:08:55.891 Get LBA Status Capability: Not Supported 00:08:55.891 Command & Feature Lockdown Capability: Not Supported 00:08:55.891 Abort Command Limit: 4 00:08:55.891 Async Event Request Limit: 4 00:08:55.891 Number of Firmware Slots: N/A 00:08:55.891 Firmware Slot 1 Read-Only: N/A 00:08:55.891 Firmware Activation Without Reset: N/A 00:08:55.891 Multiple Update Detection Support: N/A 00:08:55.891 Firmware Update Granularity: No Information Provided 00:08:55.891 Per-Namespace SMART Log: Yes 00:08:55.891 Asymmetric Namespace Access Log Page: Not Supported 00:08:55.891 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:08:55.891 Command Effects Log Page: Supported 00:08:55.891 Get Log Page Extended Data: Supported 00:08:55.891 Telemetry Log Pages: Not Supported 00:08:55.891 Persistent Event Log Pages: Not Supported 00:08:55.891 Supported Log Pages Log Page: May Support 00:08:55.891 Commands Supported & Effects Log Page: Not Supported 00:08:55.891 Feature Identifiers & Effects Log Page:May Support 00:08:55.891 NVMe-MI Commands & Effects Log Page: May Support 00:08:55.891 Data Area 4 for Telemetry Log: Not Supported 00:08:55.891 Error Log Page Entries Supported: 1 00:08:55.891 Keep Alive: Not Supported 00:08:55.891 00:08:55.891 NVM Command Set Attributes 00:08:55.891 ========================== 00:08:55.891 Submission Queue Entry Size 00:08:55.891 Max: 64 00:08:55.891 Min: 64 00:08:55.891 Completion Queue Entry Size 00:08:55.891 Max: 16 00:08:55.891 Min: 16 00:08:55.891 Number of Namespaces: 256 00:08:55.891 Compare Command: Supported 00:08:55.891 Write Uncorrectable Command: Not Supported 00:08:55.891 Dataset Management Command: Supported 00:08:55.891 Write Zeroes Command: Supported 00:08:55.891 Set Features Save Field: Supported 00:08:55.891 Reservations: Not Supported 00:08:55.891 Timestamp: Supported 00:08:55.891 Copy: Supported 00:08:55.891 Volatile Write Cache: Present 00:08:55.891 Atomic Write Unit (Normal): 1 00:08:55.891 Atomic Write Unit (PFail): 1 00:08:55.891 Atomic Compare & Write Unit: 1 00:08:55.891 Fused Compare & Write: Not Supported 00:08:55.891 Scatter-Gather List 00:08:55.891 SGL Command Set: Supported 00:08:55.891 SGL Keyed: Not Supported 00:08:55.891 SGL Bit Bucket Descriptor: Not Supported 00:08:55.891 SGL Metadata Pointer: Not Supported 00:08:55.891 Oversized SGL: Not Supported 00:08:55.891 SGL Metadata Address: Not Supported 00:08:55.891 SGL Offset: Not Supported 00:08:55.891 Transport SGL Data Block: Not Supported 00:08:55.891 Replay Protected Memory Block: Not Supported 00:08:55.891 00:08:55.891 Firmware Slot Information 00:08:55.891 ========================= 00:08:55.891 Active slot: 1 00:08:55.891 Slot 1 Firmware Revision: 1.0 00:08:55.891 00:08:55.891 00:08:55.891 Commands Supported and Effects 00:08:55.891 ============================== 00:08:55.891 Admin Commands 00:08:55.891 -------------- 00:08:55.891 Delete I/O Submission Queue (00h): Supported 00:08:55.891 Create I/O Submission Queue (01h): Supported 00:08:55.891 
Get Log Page (02h): Supported 00:08:55.891 Delete I/O Completion Queue (04h): Supported 00:08:55.891 Create I/O Completion Queue (05h): Supported 00:08:55.891 Identify (06h): Supported 00:08:55.891 Abort (08h): Supported 00:08:55.891 Set Features (09h): Supported 00:08:55.891 Get Features (0Ah): Supported 00:08:55.891 Asynchronous Event Request (0Ch): Supported 00:08:55.891 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:55.891 Directive Send (19h): Supported 00:08:55.891 Directive Receive (1Ah): Supported 00:08:55.891 Virtualization Management (1Ch): Supported 00:08:55.891 Doorbell Buffer Config (7Ch): Supported 00:08:55.891 Format NVM (80h): Supported LBA-Change 00:08:55.891 I/O Commands 00:08:55.891 ------------ 00:08:55.892 Flush (00h): Supported LBA-Change 00:08:55.892 Write (01h): Supported LBA-Change 00:08:55.892 Read (02h): Supported 00:08:55.892 Compare (05h): Supported 00:08:55.892 Write Zeroes (08h): Supported LBA-Change 00:08:55.892 Dataset Management (09h): Supported LBA-Change 00:08:55.892 Unknown (0Ch): Supported 00:08:55.892 Unknown (12h): Supported 00:08:55.892 Copy (19h): Supported LBA-Change 00:08:55.892 Unknown (1Dh): Supported LBA-Change 00:08:55.892 00:08:55.892 Error Log 00:08:55.892 ========= 00:08:55.892 00:08:55.892 Arbitration 00:08:55.892 =========== 00:08:55.892 Arbitration Burst: no limit 00:08:55.892 00:08:55.892 Power Management 00:08:55.892 ================ 00:08:55.892 Number of Power States: 1 00:08:55.892 Current Power State: Power State #0 00:08:55.892 Power State #0: 00:08:55.892 Max Power: 25.00 W 00:08:55.892 Non-Operational State: Operational 00:08:55.892 Entry Latency: 16 microseconds 00:08:55.892 Exit Latency: 4 microseconds 00:08:55.892 Relative Read Throughput: 0 00:08:55.892 Relative Read Latency: 0 00:08:55.892 Relative Write Throughput: 0 00:08:55.892 Relative Write Latency: 0 00:08:55.892 Idle Power: Not Reported 00:08:55.892 Active Power: Not Reported 00:08:55.892 Non-Operational Permissive Mode: Not Supported 00:08:55.892 00:08:55.892 Health Information 00:08:55.892 ================== 00:08:55.892 Critical Warnings: 00:08:55.892 Available Spare Space: OK 00:08:55.892 Temperature: OK 00:08:55.892 Device Reliability: OK 00:08:55.892 Read Only: No 00:08:55.892 Volatile Memory Backup: OK 00:08:55.892 Current Temperature: 323 Kelvin (50 Celsius) 00:08:55.892 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:55.892 Available Spare: 0% 00:08:55.892 Available Spare Threshold: 0% 00:08:55.892 Life Percentage Used: 0% 00:08:55.892 Data Units Read: 651 00:08:55.892 Data Units Written: 579 00:08:55.892 Host Read Commands: 32026 00:08:55.892 Host Write Commands: 31828 00:08:55.892 Controller Busy Time: 0 minutes 00:08:55.892 Power Cycles: 0 00:08:55.892 Power On Hours: 0 hours 00:08:55.892 Unsafe Shutdowns: 0 00:08:55.892 Unrecoverable Media Errors: 0 00:08:55.892 Lifetime Error Log Entries: 0 00:08:55.892 Warning Temperature Time: 0 minutes 00:08:55.892 Critical Temperature Time: 0 minutes 00:08:55.892 00:08:55.892 Number of Queues 00:08:55.892 ================ 00:08:55.892 Number of I/O Submission Queues: 64 00:08:55.892 Number of I/O Completion Queues: 64 00:08:55.892 00:08:55.892 ZNS Specific Controller Data 00:08:55.892 ============================ 00:08:55.892 Zone Append Size Limit: 0 00:08:55.892 00:08:55.892 00:08:55.892 Active Namespaces 00:08:55.892 ================= 00:08:55.892 Namespace ID:1 00:08:55.892 Error Recovery Timeout: Unlimited 00:08:55.892 Command Set Identifier: NVM (00h) 00:08:55.892 Deallocate: Supported 
00:08:55.892 Deallocated/Unwritten Error: Supported 00:08:55.892 Deallocated Read Value: All 0x00 00:08:55.892 Deallocate in Write Zeroes: Not Supported 00:08:55.892 Deallocated Guard Field: 0xFFFF 00:08:55.892 Flush: Supported 00:08:55.892 Reservation: Not Supported 00:08:55.892 Metadata Transferred as: Separate Metadata Buffer 00:08:55.892 Namespace Sharing Capabilities: Private 00:08:55.892 Size (in LBAs): 1548666 (5GiB) 00:08:55.892 Capacity (in LBAs): 1548666 (5GiB) 00:08:55.892 Utilization (in LBAs): 1548666 (5GiB) 00:08:55.892 Thin Provisioning: Not Supported 00:08:55.892 Per-NS Atomic Units: No 00:08:55.892 Maximum Single Source Range Length: 128 00:08:55.892 Maximum Copy Length: 128 00:08:55.892 Maximum Source Range Count: 128 00:08:55.892 NGUID/EUI64 Never Reused: No 00:08:55.892 Namespace Write Protected: No 00:08:55.892 Number of LBA Formats: 8 00:08:55.892 Current LBA Format: LBA Format #07 00:08:55.892 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:55.892 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:55.892 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:55.892 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:55.892 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:55.892 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:55.892 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:55.892 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:55.892 00:08:55.892 NVM Specific Namespace Data 00:08:55.892 =========================== 00:08:55.892 Logical Block Storage Tag Mask: 0 00:08:55.892 Protection Information Capabilities: 00:08:55.892 16b Guard Protection Information Storage Tag Support: No 00:08:55.892 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:55.892 Storage Tag Check Read Support: No 00:08:55.892 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:55.892 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:55.892 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:55.892 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:55.892 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:55.892 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:55.892 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:55.892 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:55.892 07:48:57 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:08:55.892 07:48:57 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' -i 0 00:08:56.152 ===================================================== 00:08:56.152 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:56.152 ===================================================== 00:08:56.152 Controller Capabilities/Features 00:08:56.152 ================================ 00:08:56.152 Vendor ID: 1b36 00:08:56.152 Subsystem Vendor ID: 1af4 00:08:56.152 Serial Number: 12341 00:08:56.152 Model Number: QEMU NVMe Ctrl 00:08:56.152 Firmware Version: 8.0.0 00:08:56.152 Recommended Arb Burst: 6 00:08:56.152 IEEE OUI Identifier: 00 54 52 00:08:56.152 Multi-path I/O 00:08:56.152 May have multiple subsystem ports: No 00:08:56.152 May have multiple 
controllers: No 00:08:56.152 Associated with SR-IOV VF: No 00:08:56.152 Max Data Transfer Size: 524288 00:08:56.152 Max Number of Namespaces: 256 00:08:56.152 Max Number of I/O Queues: 64 00:08:56.152 NVMe Specification Version (VS): 1.4 00:08:56.152 NVMe Specification Version (Identify): 1.4 00:08:56.152 Maximum Queue Entries: 2048 00:08:56.152 Contiguous Queues Required: Yes 00:08:56.152 Arbitration Mechanisms Supported 00:08:56.152 Weighted Round Robin: Not Supported 00:08:56.152 Vendor Specific: Not Supported 00:08:56.152 Reset Timeout: 7500 ms 00:08:56.152 Doorbell Stride: 4 bytes 00:08:56.152 NVM Subsystem Reset: Not Supported 00:08:56.152 Command Sets Supported 00:08:56.152 NVM Command Set: Supported 00:08:56.152 Boot Partition: Not Supported 00:08:56.152 Memory Page Size Minimum: 4096 bytes 00:08:56.152 Memory Page Size Maximum: 65536 bytes 00:08:56.152 Persistent Memory Region: Not Supported 00:08:56.152 Optional Asynchronous Events Supported 00:08:56.152 Namespace Attribute Notices: Supported 00:08:56.152 Firmware Activation Notices: Not Supported 00:08:56.152 ANA Change Notices: Not Supported 00:08:56.152 PLE Aggregate Log Change Notices: Not Supported 00:08:56.152 LBA Status Info Alert Notices: Not Supported 00:08:56.152 EGE Aggregate Log Change Notices: Not Supported 00:08:56.152 Normal NVM Subsystem Shutdown event: Not Supported 00:08:56.152 Zone Descriptor Change Notices: Not Supported 00:08:56.152 Discovery Log Change Notices: Not Supported 00:08:56.152 Controller Attributes 00:08:56.152 128-bit Host Identifier: Not Supported 00:08:56.152 Non-Operational Permissive Mode: Not Supported 00:08:56.152 NVM Sets: Not Supported 00:08:56.152 Read Recovery Levels: Not Supported 00:08:56.152 Endurance Groups: Not Supported 00:08:56.152 Predictable Latency Mode: Not Supported 00:08:56.152 Traffic Based Keep ALive: Not Supported 00:08:56.152 Namespace Granularity: Not Supported 00:08:56.152 SQ Associations: Not Supported 00:08:56.152 UUID List: Not Supported 00:08:56.152 Multi-Domain Subsystem: Not Supported 00:08:56.152 Fixed Capacity Management: Not Supported 00:08:56.152 Variable Capacity Management: Not Supported 00:08:56.152 Delete Endurance Group: Not Supported 00:08:56.152 Delete NVM Set: Not Supported 00:08:56.152 Extended LBA Formats Supported: Supported 00:08:56.152 Flexible Data Placement Supported: Not Supported 00:08:56.152 00:08:56.152 Controller Memory Buffer Support 00:08:56.152 ================================ 00:08:56.152 Supported: No 00:08:56.152 00:08:56.152 Persistent Memory Region Support 00:08:56.152 ================================ 00:08:56.152 Supported: No 00:08:56.152 00:08:56.152 Admin Command Set Attributes 00:08:56.152 ============================ 00:08:56.152 Security Send/Receive: Not Supported 00:08:56.152 Format NVM: Supported 00:08:56.152 Firmware Activate/Download: Not Supported 00:08:56.152 Namespace Management: Supported 00:08:56.152 Device Self-Test: Not Supported 00:08:56.152 Directives: Supported 00:08:56.152 NVMe-MI: Not Supported 00:08:56.152 Virtualization Management: Not Supported 00:08:56.152 Doorbell Buffer Config: Supported 00:08:56.152 Get LBA Status Capability: Not Supported 00:08:56.152 Command & Feature Lockdown Capability: Not Supported 00:08:56.152 Abort Command Limit: 4 00:08:56.152 Async Event Request Limit: 4 00:08:56.152 Number of Firmware Slots: N/A 00:08:56.152 Firmware Slot 1 Read-Only: N/A 00:08:56.152 Firmware Activation Without Reset: N/A 00:08:56.152 Multiple Update Detection Support: N/A 00:08:56.152 Firmware Update 
Granularity: No Information Provided 00:08:56.152 Per-Namespace SMART Log: Yes 00:08:56.152 Asymmetric Namespace Access Log Page: Not Supported 00:08:56.152 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:08:56.152 Command Effects Log Page: Supported 00:08:56.152 Get Log Page Extended Data: Supported 00:08:56.152 Telemetry Log Pages: Not Supported 00:08:56.152 Persistent Event Log Pages: Not Supported 00:08:56.152 Supported Log Pages Log Page: May Support 00:08:56.152 Commands Supported & Effects Log Page: Not Supported 00:08:56.152 Feature Identifiers & Effects Log Page:May Support 00:08:56.152 NVMe-MI Commands & Effects Log Page: May Support 00:08:56.152 Data Area 4 for Telemetry Log: Not Supported 00:08:56.152 Error Log Page Entries Supported: 1 00:08:56.152 Keep Alive: Not Supported 00:08:56.152 00:08:56.152 NVM Command Set Attributes 00:08:56.152 ========================== 00:08:56.152 Submission Queue Entry Size 00:08:56.152 Max: 64 00:08:56.152 Min: 64 00:08:56.152 Completion Queue Entry Size 00:08:56.152 Max: 16 00:08:56.152 Min: 16 00:08:56.152 Number of Namespaces: 256 00:08:56.152 Compare Command: Supported 00:08:56.152 Write Uncorrectable Command: Not Supported 00:08:56.152 Dataset Management Command: Supported 00:08:56.152 Write Zeroes Command: Supported 00:08:56.152 Set Features Save Field: Supported 00:08:56.152 Reservations: Not Supported 00:08:56.152 Timestamp: Supported 00:08:56.152 Copy: Supported 00:08:56.152 Volatile Write Cache: Present 00:08:56.152 Atomic Write Unit (Normal): 1 00:08:56.152 Atomic Write Unit (PFail): 1 00:08:56.152 Atomic Compare & Write Unit: 1 00:08:56.152 Fused Compare & Write: Not Supported 00:08:56.152 Scatter-Gather List 00:08:56.152 SGL Command Set: Supported 00:08:56.152 SGL Keyed: Not Supported 00:08:56.152 SGL Bit Bucket Descriptor: Not Supported 00:08:56.152 SGL Metadata Pointer: Not Supported 00:08:56.152 Oversized SGL: Not Supported 00:08:56.152 SGL Metadata Address: Not Supported 00:08:56.152 SGL Offset: Not Supported 00:08:56.152 Transport SGL Data Block: Not Supported 00:08:56.152 Replay Protected Memory Block: Not Supported 00:08:56.152 00:08:56.152 Firmware Slot Information 00:08:56.152 ========================= 00:08:56.152 Active slot: 1 00:08:56.152 Slot 1 Firmware Revision: 1.0 00:08:56.152 00:08:56.152 00:08:56.152 Commands Supported and Effects 00:08:56.152 ============================== 00:08:56.152 Admin Commands 00:08:56.152 -------------- 00:08:56.152 Delete I/O Submission Queue (00h): Supported 00:08:56.152 Create I/O Submission Queue (01h): Supported 00:08:56.152 Get Log Page (02h): Supported 00:08:56.152 Delete I/O Completion Queue (04h): Supported 00:08:56.152 Create I/O Completion Queue (05h): Supported 00:08:56.152 Identify (06h): Supported 00:08:56.152 Abort (08h): Supported 00:08:56.152 Set Features (09h): Supported 00:08:56.152 Get Features (0Ah): Supported 00:08:56.152 Asynchronous Event Request (0Ch): Supported 00:08:56.152 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:56.152 Directive Send (19h): Supported 00:08:56.152 Directive Receive (1Ah): Supported 00:08:56.152 Virtualization Management (1Ch): Supported 00:08:56.152 Doorbell Buffer Config (7Ch): Supported 00:08:56.152 Format NVM (80h): Supported LBA-Change 00:08:56.152 I/O Commands 00:08:56.152 ------------ 00:08:56.152 Flush (00h): Supported LBA-Change 00:08:56.152 Write (01h): Supported LBA-Change 00:08:56.152 Read (02h): Supported 00:08:56.152 Compare (05h): Supported 00:08:56.152 Write Zeroes (08h): Supported LBA-Change 00:08:56.152 
Dataset Management (09h): Supported LBA-Change 00:08:56.152 Unknown (0Ch): Supported 00:08:56.152 Unknown (12h): Supported 00:08:56.152 Copy (19h): Supported LBA-Change 00:08:56.152 Unknown (1Dh): Supported LBA-Change 00:08:56.152 00:08:56.152 Error Log 00:08:56.152 ========= 00:08:56.152 00:08:56.152 Arbitration 00:08:56.152 =========== 00:08:56.152 Arbitration Burst: no limit 00:08:56.152 00:08:56.152 Power Management 00:08:56.152 ================ 00:08:56.152 Number of Power States: 1 00:08:56.152 Current Power State: Power State #0 00:08:56.152 Power State #0: 00:08:56.152 Max Power: 25.00 W 00:08:56.152 Non-Operational State: Operational 00:08:56.152 Entry Latency: 16 microseconds 00:08:56.153 Exit Latency: 4 microseconds 00:08:56.153 Relative Read Throughput: 0 00:08:56.153 Relative Read Latency: 0 00:08:56.153 Relative Write Throughput: 0 00:08:56.153 Relative Write Latency: 0 00:08:56.153 Idle Power: Not Reported 00:08:56.153 Active Power: Not Reported 00:08:56.153 Non-Operational Permissive Mode: Not Supported 00:08:56.153 00:08:56.153 Health Information 00:08:56.153 ================== 00:08:56.153 Critical Warnings: 00:08:56.153 Available Spare Space: OK 00:08:56.153 Temperature: OK 00:08:56.153 Device Reliability: OK 00:08:56.153 Read Only: No 00:08:56.153 Volatile Memory Backup: OK 00:08:56.153 Current Temperature: 323 Kelvin (50 Celsius) 00:08:56.153 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:56.153 Available Spare: 0% 00:08:56.153 Available Spare Threshold: 0% 00:08:56.153 Life Percentage Used: 0% 00:08:56.153 Data Units Read: 958 00:08:56.153 Data Units Written: 818 00:08:56.153 Host Read Commands: 47306 00:08:56.153 Host Write Commands: 46003 00:08:56.153 Controller Busy Time: 0 minutes 00:08:56.153 Power Cycles: 0 00:08:56.153 Power On Hours: 0 hours 00:08:56.153 Unsafe Shutdowns: 0 00:08:56.153 Unrecoverable Media Errors: 0 00:08:56.153 Lifetime Error Log Entries: 0 00:08:56.153 Warning Temperature Time: 0 minutes 00:08:56.153 Critical Temperature Time: 0 minutes 00:08:56.153 00:08:56.153 Number of Queues 00:08:56.153 ================ 00:08:56.153 Number of I/O Submission Queues: 64 00:08:56.153 Number of I/O Completion Queues: 64 00:08:56.153 00:08:56.153 ZNS Specific Controller Data 00:08:56.153 ============================ 00:08:56.153 Zone Append Size Limit: 0 00:08:56.153 00:08:56.153 00:08:56.153 Active Namespaces 00:08:56.153 ================= 00:08:56.153 Namespace ID:1 00:08:56.153 Error Recovery Timeout: Unlimited 00:08:56.153 Command Set Identifier: NVM (00h) 00:08:56.153 Deallocate: Supported 00:08:56.153 Deallocated/Unwritten Error: Supported 00:08:56.153 Deallocated Read Value: All 0x00 00:08:56.153 Deallocate in Write Zeroes: Not Supported 00:08:56.153 Deallocated Guard Field: 0xFFFF 00:08:56.153 Flush: Supported 00:08:56.153 Reservation: Not Supported 00:08:56.153 Namespace Sharing Capabilities: Private 00:08:56.153 Size (in LBAs): 1310720 (5GiB) 00:08:56.153 Capacity (in LBAs): 1310720 (5GiB) 00:08:56.153 Utilization (in LBAs): 1310720 (5GiB) 00:08:56.153 Thin Provisioning: Not Supported 00:08:56.153 Per-NS Atomic Units: No 00:08:56.153 Maximum Single Source Range Length: 128 00:08:56.153 Maximum Copy Length: 128 00:08:56.153 Maximum Source Range Count: 128 00:08:56.153 NGUID/EUI64 Never Reused: No 00:08:56.153 Namespace Write Protected: No 00:08:56.153 Number of LBA Formats: 8 00:08:56.153 Current LBA Format: LBA Format #04 00:08:56.153 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:56.153 LBA Format #01: Data Size: 512 Metadata Size: 8 
00:08:56.153 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:56.153 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:56.153 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:56.153 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:56.153 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:56.153 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:56.153 00:08:56.153 NVM Specific Namespace Data 00:08:56.153 =========================== 00:08:56.153 Logical Block Storage Tag Mask: 0 00:08:56.153 Protection Information Capabilities: 00:08:56.153 16b Guard Protection Information Storage Tag Support: No 00:08:56.153 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:56.153 Storage Tag Check Read Support: No 00:08:56.153 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:56.153 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:56.153 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:56.153 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:56.153 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:56.153 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:56.153 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:56.153 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:56.153 07:48:58 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:08:56.153 07:48:58 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' -i 0 00:08:56.412 ===================================================== 00:08:56.412 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:56.412 ===================================================== 00:08:56.412 Controller Capabilities/Features 00:08:56.412 ================================ 00:08:56.412 Vendor ID: 1b36 00:08:56.412 Subsystem Vendor ID: 1af4 00:08:56.412 Serial Number: 12342 00:08:56.412 Model Number: QEMU NVMe Ctrl 00:08:56.412 Firmware Version: 8.0.0 00:08:56.412 Recommended Arb Burst: 6 00:08:56.412 IEEE OUI Identifier: 00 54 52 00:08:56.412 Multi-path I/O 00:08:56.412 May have multiple subsystem ports: No 00:08:56.412 May have multiple controllers: No 00:08:56.412 Associated with SR-IOV VF: No 00:08:56.412 Max Data Transfer Size: 524288 00:08:56.412 Max Number of Namespaces: 256 00:08:56.412 Max Number of I/O Queues: 64 00:08:56.412 NVMe Specification Version (VS): 1.4 00:08:56.412 NVMe Specification Version (Identify): 1.4 00:08:56.412 Maximum Queue Entries: 2048 00:08:56.412 Contiguous Queues Required: Yes 00:08:56.412 Arbitration Mechanisms Supported 00:08:56.412 Weighted Round Robin: Not Supported 00:08:56.412 Vendor Specific: Not Supported 00:08:56.412 Reset Timeout: 7500 ms 00:08:56.412 Doorbell Stride: 4 bytes 00:08:56.412 NVM Subsystem Reset: Not Supported 00:08:56.412 Command Sets Supported 00:08:56.412 NVM Command Set: Supported 00:08:56.412 Boot Partition: Not Supported 00:08:56.412 Memory Page Size Minimum: 4096 bytes 00:08:56.412 Memory Page Size Maximum: 65536 bytes 00:08:56.412 Persistent Memory Region: Not Supported 00:08:56.412 Optional Asynchronous Events Supported 00:08:56.412 Namespace Attribute Notices: Supported 00:08:56.412 Firmware 
Activation Notices: Not Supported 00:08:56.412 ANA Change Notices: Not Supported 00:08:56.412 PLE Aggregate Log Change Notices: Not Supported 00:08:56.412 LBA Status Info Alert Notices: Not Supported 00:08:56.412 EGE Aggregate Log Change Notices: Not Supported 00:08:56.412 Normal NVM Subsystem Shutdown event: Not Supported 00:08:56.412 Zone Descriptor Change Notices: Not Supported 00:08:56.412 Discovery Log Change Notices: Not Supported 00:08:56.412 Controller Attributes 00:08:56.412 128-bit Host Identifier: Not Supported 00:08:56.412 Non-Operational Permissive Mode: Not Supported 00:08:56.412 NVM Sets: Not Supported 00:08:56.412 Read Recovery Levels: Not Supported 00:08:56.412 Endurance Groups: Not Supported 00:08:56.412 Predictable Latency Mode: Not Supported 00:08:56.412 Traffic Based Keep ALive: Not Supported 00:08:56.412 Namespace Granularity: Not Supported 00:08:56.412 SQ Associations: Not Supported 00:08:56.412 UUID List: Not Supported 00:08:56.412 Multi-Domain Subsystem: Not Supported 00:08:56.412 Fixed Capacity Management: Not Supported 00:08:56.412 Variable Capacity Management: Not Supported 00:08:56.412 Delete Endurance Group: Not Supported 00:08:56.412 Delete NVM Set: Not Supported 00:08:56.412 Extended LBA Formats Supported: Supported 00:08:56.412 Flexible Data Placement Supported: Not Supported 00:08:56.412 00:08:56.412 Controller Memory Buffer Support 00:08:56.412 ================================ 00:08:56.412 Supported: No 00:08:56.412 00:08:56.412 Persistent Memory Region Support 00:08:56.412 ================================ 00:08:56.412 Supported: No 00:08:56.412 00:08:56.412 Admin Command Set Attributes 00:08:56.412 ============================ 00:08:56.412 Security Send/Receive: Not Supported 00:08:56.412 Format NVM: Supported 00:08:56.412 Firmware Activate/Download: Not Supported 00:08:56.412 Namespace Management: Supported 00:08:56.412 Device Self-Test: Not Supported 00:08:56.412 Directives: Supported 00:08:56.412 NVMe-MI: Not Supported 00:08:56.412 Virtualization Management: Not Supported 00:08:56.412 Doorbell Buffer Config: Supported 00:08:56.412 Get LBA Status Capability: Not Supported 00:08:56.412 Command & Feature Lockdown Capability: Not Supported 00:08:56.412 Abort Command Limit: 4 00:08:56.412 Async Event Request Limit: 4 00:08:56.413 Number of Firmware Slots: N/A 00:08:56.413 Firmware Slot 1 Read-Only: N/A 00:08:56.413 Firmware Activation Without Reset: N/A 00:08:56.413 Multiple Update Detection Support: N/A 00:08:56.413 Firmware Update Granularity: No Information Provided 00:08:56.413 Per-Namespace SMART Log: Yes 00:08:56.413 Asymmetric Namespace Access Log Page: Not Supported 00:08:56.413 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:08:56.413 Command Effects Log Page: Supported 00:08:56.413 Get Log Page Extended Data: Supported 00:08:56.413 Telemetry Log Pages: Not Supported 00:08:56.413 Persistent Event Log Pages: Not Supported 00:08:56.413 Supported Log Pages Log Page: May Support 00:08:56.413 Commands Supported & Effects Log Page: Not Supported 00:08:56.413 Feature Identifiers & Effects Log Page:May Support 00:08:56.413 NVMe-MI Commands & Effects Log Page: May Support 00:08:56.413 Data Area 4 for Telemetry Log: Not Supported 00:08:56.413 Error Log Page Entries Supported: 1 00:08:56.413 Keep Alive: Not Supported 00:08:56.413 00:08:56.413 NVM Command Set Attributes 00:08:56.413 ========================== 00:08:56.413 Submission Queue Entry Size 00:08:56.413 Max: 64 00:08:56.413 Min: 64 00:08:56.413 Completion Queue Entry Size 00:08:56.413 Max: 16 
00:08:56.413 Min: 16 00:08:56.413 Number of Namespaces: 256 00:08:56.413 Compare Command: Supported 00:08:56.413 Write Uncorrectable Command: Not Supported 00:08:56.413 Dataset Management Command: Supported 00:08:56.413 Write Zeroes Command: Supported 00:08:56.413 Set Features Save Field: Supported 00:08:56.413 Reservations: Not Supported 00:08:56.413 Timestamp: Supported 00:08:56.413 Copy: Supported 00:08:56.413 Volatile Write Cache: Present 00:08:56.413 Atomic Write Unit (Normal): 1 00:08:56.413 Atomic Write Unit (PFail): 1 00:08:56.413 Atomic Compare & Write Unit: 1 00:08:56.413 Fused Compare & Write: Not Supported 00:08:56.413 Scatter-Gather List 00:08:56.413 SGL Command Set: Supported 00:08:56.413 SGL Keyed: Not Supported 00:08:56.413 SGL Bit Bucket Descriptor: Not Supported 00:08:56.413 SGL Metadata Pointer: Not Supported 00:08:56.413 Oversized SGL: Not Supported 00:08:56.413 SGL Metadata Address: Not Supported 00:08:56.413 SGL Offset: Not Supported 00:08:56.413 Transport SGL Data Block: Not Supported 00:08:56.413 Replay Protected Memory Block: Not Supported 00:08:56.413 00:08:56.413 Firmware Slot Information 00:08:56.413 ========================= 00:08:56.413 Active slot: 1 00:08:56.413 Slot 1 Firmware Revision: 1.0 00:08:56.413 00:08:56.413 00:08:56.413 Commands Supported and Effects 00:08:56.413 ============================== 00:08:56.413 Admin Commands 00:08:56.413 -------------- 00:08:56.413 Delete I/O Submission Queue (00h): Supported 00:08:56.413 Create I/O Submission Queue (01h): Supported 00:08:56.413 Get Log Page (02h): Supported 00:08:56.413 Delete I/O Completion Queue (04h): Supported 00:08:56.413 Create I/O Completion Queue (05h): Supported 00:08:56.413 Identify (06h): Supported 00:08:56.413 Abort (08h): Supported 00:08:56.413 Set Features (09h): Supported 00:08:56.413 Get Features (0Ah): Supported 00:08:56.413 Asynchronous Event Request (0Ch): Supported 00:08:56.413 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:56.413 Directive Send (19h): Supported 00:08:56.413 Directive Receive (1Ah): Supported 00:08:56.413 Virtualization Management (1Ch): Supported 00:08:56.413 Doorbell Buffer Config (7Ch): Supported 00:08:56.413 Format NVM (80h): Supported LBA-Change 00:08:56.413 I/O Commands 00:08:56.413 ------------ 00:08:56.413 Flush (00h): Supported LBA-Change 00:08:56.413 Write (01h): Supported LBA-Change 00:08:56.413 Read (02h): Supported 00:08:56.413 Compare (05h): Supported 00:08:56.413 Write Zeroes (08h): Supported LBA-Change 00:08:56.413 Dataset Management (09h): Supported LBA-Change 00:08:56.413 Unknown (0Ch): Supported 00:08:56.413 Unknown (12h): Supported 00:08:56.413 Copy (19h): Supported LBA-Change 00:08:56.413 Unknown (1Dh): Supported LBA-Change 00:08:56.413 00:08:56.413 Error Log 00:08:56.413 ========= 00:08:56.413 00:08:56.413 Arbitration 00:08:56.413 =========== 00:08:56.413 Arbitration Burst: no limit 00:08:56.413 00:08:56.413 Power Management 00:08:56.413 ================ 00:08:56.413 Number of Power States: 1 00:08:56.413 Current Power State: Power State #0 00:08:56.413 Power State #0: 00:08:56.413 Max Power: 25.00 W 00:08:56.413 Non-Operational State: Operational 00:08:56.413 Entry Latency: 16 microseconds 00:08:56.413 Exit Latency: 4 microseconds 00:08:56.413 Relative Read Throughput: 0 00:08:56.413 Relative Read Latency: 0 00:08:56.413 Relative Write Throughput: 0 00:08:56.413 Relative Write Latency: 0 00:08:56.413 Idle Power: Not Reported 00:08:56.413 Active Power: Not Reported 00:08:56.413 Non-Operational Permissive Mode: Not Supported 
00:08:56.413 00:08:56.413 Health Information 00:08:56.413 ================== 00:08:56.413 Critical Warnings: 00:08:56.413 Available Spare Space: OK 00:08:56.413 Temperature: OK 00:08:56.413 Device Reliability: OK 00:08:56.413 Read Only: No 00:08:56.413 Volatile Memory Backup: OK 00:08:56.413 Current Temperature: 323 Kelvin (50 Celsius) 00:08:56.413 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:56.413 Available Spare: 0% 00:08:56.413 Available Spare Threshold: 0% 00:08:56.413 Life Percentage Used: 0% 00:08:56.413 Data Units Read: 2067 00:08:56.413 Data Units Written: 1855 00:08:56.413 Host Read Commands: 97527 00:08:56.413 Host Write Commands: 95840 00:08:56.413 Controller Busy Time: 0 minutes 00:08:56.413 Power Cycles: 0 00:08:56.413 Power On Hours: 0 hours 00:08:56.413 Unsafe Shutdowns: 0 00:08:56.413 Unrecoverable Media Errors: 0 00:08:56.413 Lifetime Error Log Entries: 0 00:08:56.413 Warning Temperature Time: 0 minutes 00:08:56.413 Critical Temperature Time: 0 minutes 00:08:56.413 00:08:56.413 Number of Queues 00:08:56.413 ================ 00:08:56.413 Number of I/O Submission Queues: 64 00:08:56.413 Number of I/O Completion Queues: 64 00:08:56.413 00:08:56.413 ZNS Specific Controller Data 00:08:56.413 ============================ 00:08:56.413 Zone Append Size Limit: 0 00:08:56.413 00:08:56.413 00:08:56.413 Active Namespaces 00:08:56.413 ================= 00:08:56.413 Namespace ID:1 00:08:56.413 Error Recovery Timeout: Unlimited 00:08:56.413 Command Set Identifier: NVM (00h) 00:08:56.413 Deallocate: Supported 00:08:56.413 Deallocated/Unwritten Error: Supported 00:08:56.413 Deallocated Read Value: All 0x00 00:08:56.413 Deallocate in Write Zeroes: Not Supported 00:08:56.413 Deallocated Guard Field: 0xFFFF 00:08:56.413 Flush: Supported 00:08:56.413 Reservation: Not Supported 00:08:56.413 Namespace Sharing Capabilities: Private 00:08:56.413 Size (in LBAs): 1048576 (4GiB) 00:08:56.413 Capacity (in LBAs): 1048576 (4GiB) 00:08:56.413 Utilization (in LBAs): 1048576 (4GiB) 00:08:56.413 Thin Provisioning: Not Supported 00:08:56.413 Per-NS Atomic Units: No 00:08:56.413 Maximum Single Source Range Length: 128 00:08:56.413 Maximum Copy Length: 128 00:08:56.413 Maximum Source Range Count: 128 00:08:56.413 NGUID/EUI64 Never Reused: No 00:08:56.413 Namespace Write Protected: No 00:08:56.413 Number of LBA Formats: 8 00:08:56.413 Current LBA Format: LBA Format #04 00:08:56.413 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:56.413 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:56.413 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:56.413 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:56.413 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:56.413 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:56.413 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:56.413 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:56.413 00:08:56.413 NVM Specific Namespace Data 00:08:56.413 =========================== 00:08:56.413 Logical Block Storage Tag Mask: 0 00:08:56.413 Protection Information Capabilities: 00:08:56.413 16b Guard Protection Information Storage Tag Support: No 00:08:56.413 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:56.413 Storage Tag Check Read Support: No 00:08:56.413 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:56.413 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:56.413 Extended LBA Format #02: Storage Tag 
Size: 0 , Protection Information Format: 16b Guard PI 00:08:56.413 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:56.413 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:56.413 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:56.413 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:56.413 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:56.413 Namespace ID:2 00:08:56.413 Error Recovery Timeout: Unlimited 00:08:56.413 Command Set Identifier: NVM (00h) 00:08:56.413 Deallocate: Supported 00:08:56.413 Deallocated/Unwritten Error: Supported 00:08:56.413 Deallocated Read Value: All 0x00 00:08:56.413 Deallocate in Write Zeroes: Not Supported 00:08:56.413 Deallocated Guard Field: 0xFFFF 00:08:56.413 Flush: Supported 00:08:56.413 Reservation: Not Supported 00:08:56.413 Namespace Sharing Capabilities: Private 00:08:56.413 Size (in LBAs): 1048576 (4GiB) 00:08:56.413 Capacity (in LBAs): 1048576 (4GiB) 00:08:56.414 Utilization (in LBAs): 1048576 (4GiB) 00:08:56.414 Thin Provisioning: Not Supported 00:08:56.414 Per-NS Atomic Units: No 00:08:56.414 Maximum Single Source Range Length: 128 00:08:56.414 Maximum Copy Length: 128 00:08:56.414 Maximum Source Range Count: 128 00:08:56.414 NGUID/EUI64 Never Reused: No 00:08:56.414 Namespace Write Protected: No 00:08:56.414 Number of LBA Formats: 8 00:08:56.414 Current LBA Format: LBA Format #04 00:08:56.414 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:56.414 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:56.414 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:56.414 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:56.414 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:56.414 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:56.414 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:56.414 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:56.414 00:08:56.414 NVM Specific Namespace Data 00:08:56.414 =========================== 00:08:56.414 Logical Block Storage Tag Mask: 0 00:08:56.414 Protection Information Capabilities: 00:08:56.414 16b Guard Protection Information Storage Tag Support: No 00:08:56.414 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:56.414 Storage Tag Check Read Support: No 00:08:56.414 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:56.414 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:56.414 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:56.414 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:56.414 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:56.414 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:56.414 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:56.414 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:56.414 Namespace ID:3 00:08:56.414 Error Recovery Timeout: Unlimited 00:08:56.414 Command Set Identifier: NVM (00h) 00:08:56.414 Deallocate: Supported 00:08:56.414 Deallocated/Unwritten Error: Supported 00:08:56.414 Deallocated Read 
Value: All 0x00 00:08:56.414 Deallocate in Write Zeroes: Not Supported 00:08:56.414 Deallocated Guard Field: 0xFFFF 00:08:56.414 Flush: Supported 00:08:56.414 Reservation: Not Supported 00:08:56.414 Namespace Sharing Capabilities: Private 00:08:56.414 Size (in LBAs): 1048576 (4GiB) 00:08:56.414 Capacity (in LBAs): 1048576 (4GiB) 00:08:56.414 Utilization (in LBAs): 1048576 (4GiB) 00:08:56.414 Thin Provisioning: Not Supported 00:08:56.414 Per-NS Atomic Units: No 00:08:56.414 Maximum Single Source Range Length: 128 00:08:56.414 Maximum Copy Length: 128 00:08:56.414 Maximum Source Range Count: 128 00:08:56.414 NGUID/EUI64 Never Reused: No 00:08:56.414 Namespace Write Protected: No 00:08:56.414 Number of LBA Formats: 8 00:08:56.414 Current LBA Format: LBA Format #04 00:08:56.414 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:56.414 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:56.414 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:56.414 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:56.414 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:56.414 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:56.414 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:56.414 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:56.414 00:08:56.414 NVM Specific Namespace Data 00:08:56.414 =========================== 00:08:56.414 Logical Block Storage Tag Mask: 0 00:08:56.414 Protection Information Capabilities: 00:08:56.414 16b Guard Protection Information Storage Tag Support: No 00:08:56.414 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:56.672 Storage Tag Check Read Support: No 00:08:56.672 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:56.672 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:56.672 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:56.672 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:56.672 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:56.672 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:56.672 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:56.672 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:56.672 07:48:58 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:08:56.672 07:48:58 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0 00:08:56.930 ===================================================== 00:08:56.930 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:56.930 ===================================================== 00:08:56.930 Controller Capabilities/Features 00:08:56.930 ================================ 00:08:56.930 Vendor ID: 1b36 00:08:56.930 Subsystem Vendor ID: 1af4 00:08:56.930 Serial Number: 12343 00:08:56.930 Model Number: QEMU NVMe Ctrl 00:08:56.930 Firmware Version: 8.0.0 00:08:56.930 Recommended Arb Burst: 6 00:08:56.930 IEEE OUI Identifier: 00 54 52 00:08:56.930 Multi-path I/O 00:08:56.930 May have multiple subsystem ports: No 00:08:56.930 May have multiple controllers: Yes 00:08:56.930 Associated with SR-IOV VF: No 00:08:56.930 Max Data Transfer Size: 524288 00:08:56.930 Max Number of Namespaces: 
256 00:08:56.930 Max Number of I/O Queues: 64 00:08:56.930 NVMe Specification Version (VS): 1.4 00:08:56.930 NVMe Specification Version (Identify): 1.4 00:08:56.930 Maximum Queue Entries: 2048 00:08:56.930 Contiguous Queues Required: Yes 00:08:56.930 Arbitration Mechanisms Supported 00:08:56.930 Weighted Round Robin: Not Supported 00:08:56.930 Vendor Specific: Not Supported 00:08:56.930 Reset Timeout: 7500 ms 00:08:56.930 Doorbell Stride: 4 bytes 00:08:56.930 NVM Subsystem Reset: Not Supported 00:08:56.930 Command Sets Supported 00:08:56.931 NVM Command Set: Supported 00:08:56.931 Boot Partition: Not Supported 00:08:56.931 Memory Page Size Minimum: 4096 bytes 00:08:56.931 Memory Page Size Maximum: 65536 bytes 00:08:56.931 Persistent Memory Region: Not Supported 00:08:56.931 Optional Asynchronous Events Supported 00:08:56.931 Namespace Attribute Notices: Supported 00:08:56.931 Firmware Activation Notices: Not Supported 00:08:56.931 ANA Change Notices: Not Supported 00:08:56.931 PLE Aggregate Log Change Notices: Not Supported 00:08:56.931 LBA Status Info Alert Notices: Not Supported 00:08:56.931 EGE Aggregate Log Change Notices: Not Supported 00:08:56.931 Normal NVM Subsystem Shutdown event: Not Supported 00:08:56.931 Zone Descriptor Change Notices: Not Supported 00:08:56.931 Discovery Log Change Notices: Not Supported 00:08:56.931 Controller Attributes 00:08:56.931 128-bit Host Identifier: Not Supported 00:08:56.931 Non-Operational Permissive Mode: Not Supported 00:08:56.931 NVM Sets: Not Supported 00:08:56.931 Read Recovery Levels: Not Supported 00:08:56.931 Endurance Groups: Supported 00:08:56.931 Predictable Latency Mode: Not Supported 00:08:56.931 Traffic Based Keep Alive: Not Supported 00:08:56.931 Namespace Granularity: Not Supported 00:08:56.931 SQ Associations: Not Supported 00:08:56.931 UUID List: Not Supported 00:08:56.931 Multi-Domain Subsystem: Not Supported 00:08:56.931 Fixed Capacity Management: Not Supported 00:08:56.931 Variable Capacity Management: Not Supported 00:08:56.931 Delete Endurance Group: Not Supported 00:08:56.931 Delete NVM Set: Not Supported 00:08:56.931 Extended LBA Formats Supported: Supported 00:08:56.931 Flexible Data Placement Supported: Supported 00:08:56.931 00:08:56.931 Controller Memory Buffer Support 00:08:56.931 ================================ 00:08:56.931 Supported: No 00:08:56.931 00:08:56.931 Persistent Memory Region Support 00:08:56.931 ================================ 00:08:56.931 Supported: No 00:08:56.931 00:08:56.931 Admin Command Set Attributes 00:08:56.931 ============================ 00:08:56.931 Security Send/Receive: Not Supported 00:08:56.931 Format NVM: Supported 00:08:56.931 Firmware Activate/Download: Not Supported 00:08:56.931 Namespace Management: Supported 00:08:56.931 Device Self-Test: Not Supported 00:08:56.931 Directives: Supported 00:08:56.931 NVMe-MI: Not Supported 00:08:56.931 Virtualization Management: Not Supported 00:08:56.931 Doorbell Buffer Config: Supported 00:08:56.931 Get LBA Status Capability: Not Supported 00:08:56.931 Command & Feature Lockdown Capability: Not Supported 00:08:56.931 Abort Command Limit: 4 00:08:56.931 Async Event Request Limit: 4 00:08:56.931 Number of Firmware Slots: N/A 00:08:56.931 Firmware Slot 1 Read-Only: N/A 00:08:56.931 Firmware Activation Without Reset: N/A 00:08:56.931 Multiple Update Detection Support: N/A 00:08:56.931 Firmware Update Granularity: No Information Provided 00:08:56.931 Per-Namespace SMART Log: Yes 00:08:56.931 Asymmetric Namespace Access Log Page: Not Supported
00:08:56.931 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:08:56.931 Command Effects Log Page: Supported 00:08:56.931 Get Log Page Extended Data: Supported 00:08:56.931 Telemetry Log Pages: Not Supported 00:08:56.931 Persistent Event Log Pages: Not Supported 00:08:56.931 Supported Log Pages Log Page: May Support 00:08:56.931 Commands Supported & Effects Log Page: Not Supported 00:08:56.931 Feature Identifiers & Effects Log Page: May Support 00:08:56.931 NVMe-MI Commands & Effects Log Page: May Support 00:08:56.931 Data Area 4 for Telemetry Log: Not Supported 00:08:56.931 Error Log Page Entries Supported: 1 00:08:56.931 Keep Alive: Not Supported 00:08:56.931 00:08:56.931 NVM Command Set Attributes 00:08:56.931 ========================== 00:08:56.931 Submission Queue Entry Size 00:08:56.931 Max: 64 00:08:56.931 Min: 64 00:08:56.931 Completion Queue Entry Size 00:08:56.931 Max: 16 00:08:56.931 Min: 16 00:08:56.931 Number of Namespaces: 256 00:08:56.931 Compare Command: Supported 00:08:56.931 Write Uncorrectable Command: Not Supported 00:08:56.931 Dataset Management Command: Supported 00:08:56.931 Write Zeroes Command: Supported 00:08:56.931 Set Features Save Field: Supported 00:08:56.931 Reservations: Not Supported 00:08:56.931 Timestamp: Supported 00:08:56.931 Copy: Supported 00:08:56.931 Volatile Write Cache: Present 00:08:56.931 Atomic Write Unit (Normal): 1 00:08:56.931 Atomic Write Unit (PFail): 1 00:08:56.931 Atomic Compare & Write Unit: 1 00:08:56.931 Fused Compare & Write: Not Supported 00:08:56.931 Scatter-Gather List 00:08:56.931 SGL Command Set: Supported 00:08:56.931 SGL Keyed: Not Supported 00:08:56.931 SGL Bit Bucket Descriptor: Not Supported 00:08:56.931 SGL Metadata Pointer: Not Supported 00:08:56.931 Oversized SGL: Not Supported 00:08:56.931 SGL Metadata Address: Not Supported 00:08:56.931 SGL Offset: Not Supported 00:08:56.931 Transport SGL Data Block: Not Supported 00:08:56.931 Replay Protected Memory Block: Not Supported 00:08:56.931 00:08:56.931 Firmware Slot Information 00:08:56.931 ========================= 00:08:56.931 Active slot: 1 00:08:56.931 Slot 1 Firmware Revision: 1.0 00:08:56.931 00:08:56.931 00:08:56.931 Commands Supported and Effects 00:08:56.931 ============================== 00:08:56.931 Admin Commands 00:08:56.931 -------------- 00:08:56.931 Delete I/O Submission Queue (00h): Supported 00:08:56.931 Create I/O Submission Queue (01h): Supported 00:08:56.931 Get Log Page (02h): Supported 00:08:56.931 Delete I/O Completion Queue (04h): Supported 00:08:56.931 Create I/O Completion Queue (05h): Supported 00:08:56.931 Identify (06h): Supported 00:08:56.931 Abort (08h): Supported 00:08:56.931 Set Features (09h): Supported 00:08:56.931 Get Features (0Ah): Supported 00:08:56.931 Asynchronous Event Request (0Ch): Supported 00:08:56.931 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:56.931 Directive Send (19h): Supported 00:08:56.931 Directive Receive (1Ah): Supported 00:08:56.931 Virtualization Management (1Ch): Supported 00:08:56.931 Doorbell Buffer Config (7Ch): Supported 00:08:56.931 Format NVM (80h): Supported LBA-Change 00:08:56.931 I/O Commands 00:08:56.931 ------------ 00:08:56.931 Flush (00h): Supported LBA-Change 00:08:56.931 Write (01h): Supported LBA-Change 00:08:56.931 Read (02h): Supported 00:08:56.931 Compare (05h): Supported 00:08:56.931 Write Zeroes (08h): Supported LBA-Change 00:08:56.931 Dataset Management (09h): Supported LBA-Change 00:08:56.931 Unknown (0Ch): Supported 00:08:56.931 Unknown (12h): Supported 00:08:56.931 Copy
(19h): Supported LBA-Change 00:08:56.931 Unknown (1Dh): Supported LBA-Change 00:08:56.931 00:08:56.931 Error Log 00:08:56.931 ========= 00:08:56.931 00:08:56.931 Arbitration 00:08:56.931 =========== 00:08:56.931 Arbitration Burst: no limit 00:08:56.931 00:08:56.931 Power Management 00:08:56.931 ================ 00:08:56.931 Number of Power States: 1 00:08:56.931 Current Power State: Power State #0 00:08:56.931 Power State #0: 00:08:56.931 Max Power: 25.00 W 00:08:56.931 Non-Operational State: Operational 00:08:56.931 Entry Latency: 16 microseconds 00:08:56.931 Exit Latency: 4 microseconds 00:08:56.931 Relative Read Throughput: 0 00:08:56.931 Relative Read Latency: 0 00:08:56.931 Relative Write Throughput: 0 00:08:56.931 Relative Write Latency: 0 00:08:56.931 Idle Power: Not Reported 00:08:56.931 Active Power: Not Reported 00:08:56.931 Non-Operational Permissive Mode: Not Supported 00:08:56.931 00:08:56.931 Health Information 00:08:56.931 ================== 00:08:56.931 Critical Warnings: 00:08:56.931 Available Spare Space: OK 00:08:56.931 Temperature: OK 00:08:56.931 Device Reliability: OK 00:08:56.931 Read Only: No 00:08:56.931 Volatile Memory Backup: OK 00:08:56.931 Current Temperature: 323 Kelvin (50 Celsius) 00:08:56.931 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:56.931 Available Spare: 0% 00:08:56.931 Available Spare Threshold: 0% 00:08:56.931 Life Percentage Used: 0% 00:08:56.931 Data Units Read: 775 00:08:56.931 Data Units Written: 704 00:08:56.931 Host Read Commands: 33313 00:08:56.931 Host Write Commands: 32737 00:08:56.931 Controller Busy Time: 0 minutes 00:08:56.931 Power Cycles: 0 00:08:56.931 Power On Hours: 0 hours 00:08:56.931 Unsafe Shutdowns: 0 00:08:56.931 Unrecoverable Media Errors: 0 00:08:56.931 Lifetime Error Log Entries: 0 00:08:56.931 Warning Temperature Time: 0 minutes 00:08:56.931 Critical Temperature Time: 0 minutes 00:08:56.931 00:08:56.931 Number of Queues 00:08:56.931 ================ 00:08:56.931 Number of I/O Submission Queues: 64 00:08:56.931 Number of I/O Completion Queues: 64 00:08:56.931 00:08:56.931 ZNS Specific Controller Data 00:08:56.931 ============================ 00:08:56.931 Zone Append Size Limit: 0 00:08:56.931 00:08:56.931 00:08:56.931 Active Namespaces 00:08:56.931 ================= 00:08:56.931 Namespace ID:1 00:08:56.931 Error Recovery Timeout: Unlimited 00:08:56.931 Command Set Identifier: NVM (00h) 00:08:56.931 Deallocate: Supported 00:08:56.931 Deallocated/Unwritten Error: Supported 00:08:56.931 Deallocated Read Value: All 0x00 00:08:56.931 Deallocate in Write Zeroes: Not Supported 00:08:56.931 Deallocated Guard Field: 0xFFFF 00:08:56.932 Flush: Supported 00:08:56.932 Reservation: Not Supported 00:08:56.932 Namespace Sharing Capabilities: Multiple Controllers 00:08:56.932 Size (in LBAs): 262144 (1GiB) 00:08:56.932 Capacity (in LBAs): 262144 (1GiB) 00:08:56.932 Utilization (in LBAs): 262144 (1GiB) 00:08:56.932 Thin Provisioning: Not Supported 00:08:56.932 Per-NS Atomic Units: No 00:08:56.932 Maximum Single Source Range Length: 128 00:08:56.932 Maximum Copy Length: 128 00:08:56.932 Maximum Source Range Count: 128 00:08:56.932 NGUID/EUI64 Never Reused: No 00:08:56.932 Namespace Write Protected: No 00:08:56.932 Endurance group ID: 1 00:08:56.932 Number of LBA Formats: 8 00:08:56.932 Current LBA Format: LBA Format #04 00:08:56.932 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:56.932 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:56.932 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:56.932 LBA Format #03: Data 
Size: 512 Metadata Size: 64 00:08:56.932 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:56.932 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:56.932 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:56.932 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:56.932 00:08:56.932 Get Feature FDP: 00:08:56.932 ================ 00:08:56.932 Enabled: Yes 00:08:56.932 FDP configuration index: 0 00:08:56.932 00:08:56.932 FDP configurations log page 00:08:56.932 =========================== 00:08:56.932 Number of FDP configurations: 1 00:08:56.932 Version: 0 00:08:56.932 Size: 112 00:08:56.932 FDP Configuration Descriptor: 0 00:08:56.932 Descriptor Size: 96 00:08:56.932 Reclaim Group Identifier format: 2 00:08:56.932 FDP Volatile Write Cache: Not Present 00:08:56.932 FDP Configuration: Valid 00:08:56.932 Vendor Specific Size: 0 00:08:56.932 Number of Reclaim Groups: 2 00:08:56.932 Number of Reclaim Unit Handles: 8 00:08:56.932 Max Placement Identifiers: 128 00:08:56.932 Number of Namespaces Supported: 256 00:08:56.932 Reclaim Unit Nominal Size: 6000000 bytes 00:08:56.932 Estimated Reclaim Unit Time Limit: Not Reported 00:08:56.932 RUH Desc #000: RUH Type: Initially Isolated 00:08:56.932 RUH Desc #001: RUH Type: Initially Isolated 00:08:56.932 RUH Desc #002: RUH Type: Initially Isolated 00:08:56.932 RUH Desc #003: RUH Type: Initially Isolated 00:08:56.932 RUH Desc #004: RUH Type: Initially Isolated 00:08:56.932 RUH Desc #005: RUH Type: Initially Isolated 00:08:56.932 RUH Desc #006: RUH Type: Initially Isolated 00:08:56.932 RUH Desc #007: RUH Type: Initially Isolated 00:08:56.932 00:08:56.932 FDP reclaim unit handle usage log page 00:08:56.932 ====================================== 00:08:56.932 Number of Reclaim Unit Handles: 8 00:08:56.932 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:08:56.932 RUH Usage Desc #001: RUH Attributes: Unused 00:08:56.932 RUH Usage Desc #002: RUH Attributes: Unused 00:08:56.932 RUH Usage Desc #003: RUH Attributes: Unused 00:08:56.932 RUH Usage Desc #004: RUH Attributes: Unused 00:08:56.932 RUH Usage Desc #005: RUH Attributes: Unused 00:08:56.932 RUH Usage Desc #006: RUH Attributes: Unused 00:08:56.932 RUH Usage Desc #007: RUH Attributes: Unused 00:08:56.932 00:08:56.932 FDP statistics log page 00:08:56.932 ======================= 00:08:56.932 Host bytes with metadata written: 417767424 00:08:56.932 Media bytes with metadata written: 417832960 00:08:56.932 Media bytes erased: 0 00:08:56.932 00:08:56.932 FDP events log page 00:08:56.932 =================== 00:08:56.932 Number of FDP events: 0 00:08:56.932 00:08:56.932 NVM Specific Namespace Data 00:08:56.932 =========================== 00:08:56.932 Logical Block Storage Tag Mask: 0 00:08:56.932 Protection Information Capabilities: 00:08:56.932 16b Guard Protection Information Storage Tag Support: No 00:08:56.932 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:56.932 Storage Tag Check Read Support: No 00:08:56.932 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:56.932 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:56.932 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:56.932 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:56.932 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:56.932 Extended LBA Format #05:
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:56.932 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:56.932 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:56.932 00:08:56.932 real 0m1.885s 00:08:56.932 user 0m0.730s 00:08:56.932 sys 0m0.955s 00:08:56.932 07:48:58 nvme.nvme_identify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:56.932 07:48:58 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:08:56.932 ************************************ 00:08:56.932 END TEST nvme_identify 00:08:56.932 ************************************ 00:08:56.932 07:48:58 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:08:56.932 07:48:58 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:56.932 07:48:58 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:56.932 07:48:58 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:56.932 ************************************ 00:08:56.932 START TEST nvme_perf 00:08:56.932 ************************************ 00:08:56.932 07:48:58 nvme.nvme_perf -- common/autotest_common.sh@1125 -- # nvme_perf 00:08:56.932 07:48:58 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:08:58.345 Initializing NVMe Controllers 00:08:58.345 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:58.345 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:58.345 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:58.345 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:58.345 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:08:58.345 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:08:58.345 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:08:58.345 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:08:58.345 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:08:58.345 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:08:58.345 Initialization complete. Launching workers. 
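The nvme_identify test that closes above drives spdk_nvme_identify once per attached controller via the `for bdf in "${bdfs[@]}"` loop visible in the log. A minimal hand-run sketch of that pass, assuming the four PCIe addresses this job attaches (the loop itself is illustrative; the binary path and flags are the ones shown in the log):
  for bdf in 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0; do
      /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r "trtype:PCIe traddr:$bdf" -i 0
  done
The nvme_perf invocation above keeps 128 reads outstanding (-q 128), each 12288 bytes (-o 12288), for one second (-t 1); the doubled -L flag enables the latency tracking behind the summaries and histograms that follow. With a fixed 12288-byte I/O size, the MiB/s column below should equal IOPS * 12288 / 1048576; a quick hedged check against the aggregate line (the 68715.32 IOPS figure is taken from the table that follows; the one-liner is illustrative, not part of the run):
  awk 'BEGIN { printf "%.2f MiB/s\n", 68715.32 * 12288 / 1048576 }'
  # prints 805.26 MiB/s, matching the reported Total.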
00:08:58.345 ======================================================== 00:08:58.345 Latency(us) 00:08:58.345 Device Information : IOPS MiB/s Average min max 00:08:58.345 PCIE (0000:00:10.0) NSID 1 from core 0: 11452.55 134.21 11202.58 7240.86 45040.28 00:08:58.345 PCIE (0000:00:11.0) NSID 1 from core 0: 11452.55 134.21 11171.89 7339.98 41485.46 00:08:58.345 PCIE (0000:00:13.0) NSID 1 from core 0: 11452.55 134.21 11137.02 7343.00 39167.10 00:08:58.345 PCIE (0000:00:12.0) NSID 1 from core 0: 11452.55 134.21 11102.07 7300.95 35436.46 00:08:58.345 PCIE (0000:00:12.0) NSID 2 from core 0: 11452.55 134.21 11067.74 7287.95 31858.96 00:08:58.345 PCIE (0000:00:12.0) NSID 3 from core 0: 11452.55 134.21 11031.37 7328.48 28206.24 00:08:58.345 ======================================================== 00:08:58.345 Total : 68715.32 805.26 11118.78 7240.86 45040.28 00:08:58.345 00:08:58.345 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:08:58.345 ================================================================================= 00:08:58.345 1.00000% : 7566.429us 00:08:58.345 10.00000% : 8460.102us 00:08:58.345 25.00000% : 9472.931us 00:08:58.345 50.00000% : 11081.542us 00:08:58.345 75.00000% : 11915.636us 00:08:58.345 90.00000% : 13464.669us 00:08:58.345 95.00000% : 14179.607us 00:08:58.345 98.00000% : 16205.265us 00:08:58.345 99.00000% : 33363.782us 00:08:58.345 99.50000% : 42419.665us 00:08:58.345 99.90000% : 44564.480us 00:08:58.345 99.99000% : 45041.105us 00:08:58.345 99.99900% : 45041.105us 00:08:58.345 99.99990% : 45041.105us 00:08:58.345 99.99999% : 45041.105us 00:08:58.345 00:08:58.345 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:08:58.345 ================================================================================= 00:08:58.345 1.00000% : 7685.585us 00:08:58.345 10.00000% : 8519.680us 00:08:58.345 25.00000% : 9472.931us 00:08:58.345 50.00000% : 11081.542us 00:08:58.345 75.00000% : 11915.636us 00:08:58.345 90.00000% : 13464.669us 00:08:58.345 95.00000% : 14179.607us 00:08:58.345 98.00000% : 16205.265us 00:08:58.345 99.00000% : 30504.029us 00:08:58.345 99.50000% : 39083.287us 00:08:58.345 99.90000% : 41228.102us 00:08:58.345 99.99000% : 41466.415us 00:08:58.345 99.99900% : 41704.727us 00:08:58.345 99.99990% : 41704.727us 00:08:58.345 99.99999% : 41704.727us 00:08:58.345 00:08:58.345 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:08:58.345 ================================================================================= 00:08:58.345 1.00000% : 7626.007us 00:08:58.345 10.00000% : 8519.680us 00:08:58.345 25.00000% : 9532.509us 00:08:58.345 50.00000% : 11141.120us 00:08:58.345 75.00000% : 11856.058us 00:08:58.345 90.00000% : 13405.091us 00:08:58.345 95.00000% : 14060.451us 00:08:58.345 98.00000% : 15966.953us 00:08:58.345 99.00000% : 27405.964us 00:08:58.345 99.50000% : 36700.160us 00:08:58.345 99.90000% : 38844.975us 00:08:58.345 99.99000% : 39321.600us 00:08:58.345 99.99900% : 39321.600us 00:08:58.345 99.99990% : 39321.600us 00:08:58.345 99.99999% : 39321.600us 00:08:58.345 00:08:58.345 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:08:58.345 ================================================================================= 00:08:58.345 1.00000% : 7685.585us 00:08:58.345 10.00000% : 8519.680us 00:08:58.345 25.00000% : 9472.931us 00:08:58.345 50.00000% : 11141.120us 00:08:58.345 75.00000% : 11856.058us 00:08:58.345 90.00000% : 13464.669us 00:08:58.345 95.00000% : 14179.607us 00:08:58.345 98.00000% : 16086.109us 
00:08:58.345 99.00000% : 23831.273us 00:08:58.345 99.50000% : 33125.469us 00:08:58.345 99.90000% : 35031.971us 00:08:58.345 99.99000% : 35508.596us 00:08:58.345 99.99900% : 35508.596us 00:08:58.345 99.99990% : 35508.596us 00:08:58.345 99.99999% : 35508.596us 00:08:58.345 00:08:58.345 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:08:58.345 ================================================================================= 00:08:58.345 1.00000% : 7685.585us 00:08:58.345 10.00000% : 8519.680us 00:08:58.345 25.00000% : 9472.931us 00:08:58.345 50.00000% : 11141.120us 00:08:58.345 75.00000% : 11856.058us 00:08:58.345 90.00000% : 13464.669us 00:08:58.345 95.00000% : 14179.607us 00:08:58.345 98.00000% : 16205.265us 00:08:58.345 99.00000% : 20256.582us 00:08:58.345 99.50000% : 29312.465us 00:08:58.345 99.90000% : 31457.280us 00:08:58.345 99.99000% : 31933.905us 00:08:58.345 99.99900% : 31933.905us 00:08:58.345 99.99990% : 31933.905us 00:08:58.345 99.99999% : 31933.905us 00:08:58.345 00:08:58.345 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:08:58.345 ================================================================================= 00:08:58.345 1.00000% : 7626.007us 00:08:58.345 10.00000% : 8519.680us 00:08:58.345 25.00000% : 9472.931us 00:08:58.345 50.00000% : 11141.120us 00:08:58.345 75.00000% : 11856.058us 00:08:58.345 90.00000% : 13464.669us 00:08:58.345 95.00000% : 14120.029us 00:08:58.345 98.00000% : 16324.422us 00:08:58.345 99.00000% : 17039.360us 00:08:58.345 99.50000% : 25618.618us 00:08:58.345 99.90000% : 27763.433us 00:08:58.345 99.99000% : 28240.058us 00:08:58.345 99.99900% : 28240.058us 00:08:58.345 99.99990% : 28240.058us 00:08:58.345 99.99999% : 28240.058us 00:08:58.345 00:08:58.345 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:08:58.345 ============================================================================== 00:08:58.345 Range in us Cumulative IO count 00:08:58.345 7238.749 - 7268.538: 0.0262% ( 3) 00:08:58.345 7268.538 - 7298.327: 0.0524% ( 3) 00:08:58.345 7298.327 - 7328.116: 0.1309% ( 9) 00:08:58.345 7328.116 - 7357.905: 0.1833% ( 6) 00:08:58.345 7357.905 - 7387.695: 0.2793% ( 11) 00:08:58.345 7387.695 - 7417.484: 0.3579% ( 9) 00:08:58.345 7417.484 - 7447.273: 0.4976% ( 16) 00:08:58.345 7447.273 - 7477.062: 0.6110% ( 13) 00:08:58.345 7477.062 - 7506.851: 0.7594% ( 17) 00:08:58.345 7506.851 - 7536.640: 0.8729% ( 13) 00:08:58.345 7536.640 - 7566.429: 1.0388% ( 19) 00:08:58.345 7566.429 - 7596.218: 1.1522% ( 13) 00:08:58.345 7596.218 - 7626.007: 1.3443% ( 22) 00:08:58.345 7626.007 - 7685.585: 1.7458% ( 46) 00:08:58.345 7685.585 - 7745.164: 2.1910% ( 51) 00:08:58.345 7745.164 - 7804.742: 2.6885% ( 57) 00:08:58.345 7804.742 - 7864.320: 3.1337% ( 51) 00:08:58.345 7864.320 - 7923.898: 3.6226% ( 56) 00:08:58.345 7923.898 - 7983.476: 4.0939% ( 54) 00:08:58.345 7983.476 - 8043.055: 4.6700% ( 66) 00:08:58.345 8043.055 - 8102.633: 5.2112% ( 62) 00:08:58.345 8102.633 - 8162.211: 5.9096% ( 80) 00:08:58.345 8162.211 - 8221.789: 6.5992% ( 79) 00:08:58.345 8221.789 - 8281.367: 7.3237% ( 83) 00:08:58.345 8281.367 - 8340.945: 8.2664% ( 108) 00:08:58.345 8340.945 - 8400.524: 9.2004% ( 107) 00:08:58.345 8400.524 - 8460.102: 10.2130% ( 116) 00:08:58.345 8460.102 - 8519.680: 11.2343% ( 117) 00:08:58.345 8519.680 - 8579.258: 12.2294% ( 114) 00:08:58.345 8579.258 - 8638.836: 13.1547% ( 106) 00:08:58.345 8638.836 - 8698.415: 14.1149% ( 110) 00:08:58.345 8698.415 - 8757.993: 15.0663% ( 109) 00:08:58.345 8757.993 - 8817.571: 15.9742% ( 104) 
00:08:58.345 8817.571 - 8877.149: 16.8471% ( 100) 00:08:58.345 8877.149 - 8936.727: 17.6152% ( 88) 00:08:58.345 8936.727 - 8996.305: 18.4532% ( 96) 00:08:58.345 8996.305 - 9055.884: 19.2912% ( 96) 00:08:58.345 9055.884 - 9115.462: 20.1205% ( 95) 00:08:58.345 9115.462 - 9175.040: 20.9759% ( 98) 00:08:58.345 9175.040 - 9234.618: 21.8575% ( 101) 00:08:58.345 9234.618 - 9294.196: 22.6344% ( 89) 00:08:58.345 9294.196 - 9353.775: 23.4637% ( 95) 00:08:58.345 9353.775 - 9413.353: 24.3453% ( 101) 00:08:58.345 9413.353 - 9472.931: 25.1484% ( 92) 00:08:58.345 9472.931 - 9532.509: 26.0126% ( 99) 00:08:58.345 9532.509 - 9592.087: 26.8680% ( 98) 00:08:58.345 9592.087 - 9651.665: 27.7409% ( 100) 00:08:58.345 9651.665 - 9711.244: 28.4131% ( 77) 00:08:58.345 9711.244 - 9770.822: 29.1725% ( 87) 00:08:58.345 9770.822 - 9830.400: 29.6875% ( 59) 00:08:58.345 9830.400 - 9889.978: 30.1938% ( 58) 00:08:58.345 9889.978 - 9949.556: 30.7175% ( 60) 00:08:58.345 9949.556 - 10009.135: 31.1802% ( 53) 00:08:58.345 10009.135 - 10068.713: 31.6166% ( 50) 00:08:58.345 10068.713 - 10128.291: 32.1142% ( 57) 00:08:58.345 10128.291 - 10187.869: 32.6554% ( 62) 00:08:58.345 10187.869 - 10247.447: 33.1878% ( 61) 00:08:58.345 10247.447 - 10307.025: 33.6941% ( 58) 00:08:58.345 10307.025 - 10366.604: 34.2528% ( 64) 00:08:58.346 10366.604 - 10426.182: 34.9249% ( 77) 00:08:58.346 10426.182 - 10485.760: 35.8328% ( 104) 00:08:58.346 10485.760 - 10545.338: 36.8890% ( 121) 00:08:58.346 10545.338 - 10604.916: 38.0150% ( 129) 00:08:58.346 10604.916 - 10664.495: 39.2545% ( 142) 00:08:58.346 10664.495 - 10724.073: 40.6774% ( 163) 00:08:58.346 10724.073 - 10783.651: 42.1264% ( 166) 00:08:58.346 10783.651 - 10843.229: 43.7238% ( 183) 00:08:58.346 10843.229 - 10902.807: 45.4172% ( 194) 00:08:58.346 10902.807 - 10962.385: 47.1980% ( 204) 00:08:58.346 10962.385 - 11021.964: 49.0660% ( 214) 00:08:58.346 11021.964 - 11081.542: 50.8293% ( 202) 00:08:58.346 11081.542 - 11141.120: 52.6449% ( 208) 00:08:58.346 11141.120 - 11200.698: 54.3383% ( 194) 00:08:58.346 11200.698 - 11260.276: 56.2238% ( 216) 00:08:58.346 11260.276 - 11319.855: 58.0395% ( 208) 00:08:58.346 11319.855 - 11379.433: 59.7853% ( 200) 00:08:58.346 11379.433 - 11439.011: 61.5485% ( 202) 00:08:58.346 11439.011 - 11498.589: 63.3031% ( 201) 00:08:58.346 11498.589 - 11558.167: 65.1798% ( 215) 00:08:58.346 11558.167 - 11617.745: 67.0740% ( 217) 00:08:58.346 11617.745 - 11677.324: 68.9682% ( 217) 00:08:58.346 11677.324 - 11736.902: 70.8013% ( 210) 00:08:58.346 11736.902 - 11796.480: 72.4511% ( 189) 00:08:58.346 11796.480 - 11856.058: 74.0311% ( 181) 00:08:58.346 11856.058 - 11915.636: 75.3492% ( 151) 00:08:58.346 11915.636 - 11975.215: 76.6323% ( 147) 00:08:58.346 11975.215 - 12034.793: 77.9155% ( 147) 00:08:58.346 12034.793 - 12094.371: 78.9368% ( 117) 00:08:58.346 12094.371 - 12153.949: 79.9232% ( 113) 00:08:58.346 12153.949 - 12213.527: 80.6913% ( 88) 00:08:58.346 12213.527 - 12273.105: 81.3984% ( 81) 00:08:58.346 12273.105 - 12332.684: 81.9658% ( 65) 00:08:58.346 12332.684 - 12392.262: 82.6117% ( 74) 00:08:58.346 12392.262 - 12451.840: 83.0656% ( 52) 00:08:58.346 12451.840 - 12511.418: 83.4846% ( 48) 00:08:58.346 12511.418 - 12570.996: 83.9385% ( 52) 00:08:58.346 12570.996 - 12630.575: 84.3052% ( 42) 00:08:58.346 12630.575 - 12690.153: 84.7242% ( 48) 00:08:58.346 12690.153 - 12749.731: 85.1082% ( 44) 00:08:58.346 12749.731 - 12809.309: 85.5360% ( 49) 00:08:58.346 12809.309 - 12868.887: 85.9113% ( 43) 00:08:58.346 12868.887 - 12928.465: 86.2954% ( 44) 00:08:58.346 12928.465 - 12988.044: 
86.6969% ( 46) 00:08:58.346 12988.044 - 13047.622: 87.1334% ( 50) 00:08:58.346 13047.622 - 13107.200: 87.6222% ( 56) 00:08:58.346 13107.200 - 13166.778: 88.1198% ( 57) 00:08:58.346 13166.778 - 13226.356: 88.5649% ( 51) 00:08:58.346 13226.356 - 13285.935: 89.0450% ( 55) 00:08:58.346 13285.935 - 13345.513: 89.5339% ( 56) 00:08:58.346 13345.513 - 13405.091: 89.9179% ( 44) 00:08:58.346 13405.091 - 13464.669: 90.3893% ( 54) 00:08:58.346 13464.669 - 13524.247: 90.8258% ( 50) 00:08:58.346 13524.247 - 13583.825: 91.3233% ( 57) 00:08:58.346 13583.825 - 13643.404: 91.8122% ( 56) 00:08:58.346 13643.404 - 13702.982: 92.2748% ( 53) 00:08:58.346 13702.982 - 13762.560: 92.7025% ( 49) 00:08:58.346 13762.560 - 13822.138: 93.1041% ( 46) 00:08:58.346 13822.138 - 13881.716: 93.5318% ( 49) 00:08:58.346 13881.716 - 13941.295: 93.8635% ( 38) 00:08:58.346 13941.295 - 14000.873: 94.2214% ( 41) 00:08:58.346 14000.873 - 14060.451: 94.5356% ( 36) 00:08:58.346 14060.451 - 14120.029: 94.8586% ( 37) 00:08:58.346 14120.029 - 14179.607: 95.1641% ( 35) 00:08:58.346 14179.607 - 14239.185: 95.4347% ( 31) 00:08:58.346 14239.185 - 14298.764: 95.7140% ( 32) 00:08:58.346 14298.764 - 14358.342: 95.9934% ( 32) 00:08:58.346 14358.342 - 14417.920: 96.2378% ( 28) 00:08:58.346 14417.920 - 14477.498: 96.4560% ( 25) 00:08:58.346 14477.498 - 14537.076: 96.6219% ( 19) 00:08:58.346 14537.076 - 14596.655: 96.7528% ( 15) 00:08:58.346 14596.655 - 14656.233: 96.8488% ( 11) 00:08:58.346 14656.233 - 14715.811: 96.9361% ( 10) 00:08:58.346 14715.811 - 14775.389: 97.0147% ( 9) 00:08:58.346 14775.389 - 14834.967: 97.0932% ( 9) 00:08:58.346 14834.967 - 14894.545: 97.1805% ( 10) 00:08:58.346 14894.545 - 14954.124: 97.2154% ( 4) 00:08:58.346 14954.124 - 15013.702: 97.2503% ( 4) 00:08:58.346 15013.702 - 15073.280: 97.2591% ( 1) 00:08:58.346 15073.280 - 15132.858: 97.3027% ( 5) 00:08:58.346 15132.858 - 15192.436: 97.3202% ( 2) 00:08:58.346 15192.436 - 15252.015: 97.3289% ( 1) 00:08:58.346 15252.015 - 15371.171: 97.3638% ( 4) 00:08:58.346 15371.171 - 15490.327: 97.4162% ( 6) 00:08:58.346 15490.327 - 15609.484: 97.5122% ( 11) 00:08:58.346 15609.484 - 15728.640: 97.6432% ( 15) 00:08:58.346 15728.640 - 15847.796: 97.7479% ( 12) 00:08:58.346 15847.796 - 15966.953: 97.8614% ( 13) 00:08:58.346 15966.953 - 16086.109: 97.9749% ( 13) 00:08:58.346 16086.109 - 16205.265: 98.0971% ( 14) 00:08:58.346 16205.265 - 16324.422: 98.2105% ( 13) 00:08:58.346 16324.422 - 16443.578: 98.3328% ( 14) 00:08:58.346 16443.578 - 16562.735: 98.4637% ( 15) 00:08:58.346 16562.735 - 16681.891: 98.5684% ( 12) 00:08:58.346 16681.891 - 16801.047: 98.6645% ( 11) 00:08:58.346 16801.047 - 16920.204: 98.7256% ( 7) 00:08:58.346 16920.204 - 17039.360: 98.8128% ( 10) 00:08:58.346 17039.360 - 17158.516: 98.8652% ( 6) 00:08:58.346 17158.516 - 17277.673: 98.8827% ( 2) 00:08:58.346 32410.531 - 32648.844: 98.9001% ( 2) 00:08:58.346 32648.844 - 32887.156: 98.9438% ( 5) 00:08:58.346 32887.156 - 33125.469: 98.9874% ( 5) 00:08:58.346 33125.469 - 33363.782: 99.0311% ( 5) 00:08:58.346 33363.782 - 33602.095: 99.0747% ( 5) 00:08:58.346 33602.095 - 33840.407: 99.1184% ( 5) 00:08:58.346 33840.407 - 34078.720: 99.1620% ( 5) 00:08:58.346 34078.720 - 34317.033: 99.1969% ( 4) 00:08:58.346 34317.033 - 34555.345: 99.2580% ( 7) 00:08:58.346 34555.345 - 34793.658: 99.3017% ( 5) 00:08:58.346 34793.658 - 35031.971: 99.3453% ( 5) 00:08:58.346 35031.971 - 35270.284: 99.3890% ( 5) 00:08:58.346 35270.284 - 35508.596: 99.4326% ( 5) 00:08:58.346 35508.596 - 35746.909: 99.4413% ( 1) 00:08:58.346 41943.040 - 42181.353: 99.4675% ( 
3) 00:08:58.346 42181.353 - 42419.665: 99.5112% ( 5) 00:08:58.346 42419.665 - 42657.978: 99.5635% ( 6) 00:08:58.346 42657.978 - 42896.291: 99.5985% ( 4) 00:08:58.346 42896.291 - 43134.604: 99.6508% ( 6) 00:08:58.346 43134.604 - 43372.916: 99.7032% ( 6) 00:08:58.346 43372.916 - 43611.229: 99.7469% ( 5) 00:08:58.346 43611.229 - 43849.542: 99.7818% ( 4) 00:08:58.346 43849.542 - 44087.855: 99.8341% ( 6) 00:08:58.346 44087.855 - 44326.167: 99.8778% ( 5) 00:08:58.346 44326.167 - 44564.480: 99.9302% ( 6) 00:08:58.346 44564.480 - 44802.793: 99.9651% ( 4) 00:08:58.346 44802.793 - 45041.105: 100.0000% ( 4) 00:08:58.346 00:08:58.346 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:08:58.346 ============================================================================== 00:08:58.346 Range in us Cumulative IO count 00:08:58.346 7328.116 - 7357.905: 0.0262% ( 3) 00:08:58.346 7357.905 - 7387.695: 0.0524% ( 3) 00:08:58.346 7387.695 - 7417.484: 0.1047% ( 6) 00:08:58.346 7417.484 - 7447.273: 0.1659% ( 7) 00:08:58.346 7447.273 - 7477.062: 0.2444% ( 9) 00:08:58.346 7477.062 - 7506.851: 0.3579% ( 13) 00:08:58.346 7506.851 - 7536.640: 0.4714% ( 13) 00:08:58.346 7536.640 - 7566.429: 0.6198% ( 17) 00:08:58.346 7566.429 - 7596.218: 0.7856% ( 19) 00:08:58.346 7596.218 - 7626.007: 0.9515% ( 19) 00:08:58.346 7626.007 - 7685.585: 1.3443% ( 45) 00:08:58.346 7685.585 - 7745.164: 1.7458% ( 46) 00:08:58.346 7745.164 - 7804.742: 2.2172% ( 54) 00:08:58.346 7804.742 - 7864.320: 2.7584% ( 62) 00:08:58.346 7864.320 - 7923.898: 3.3520% ( 68) 00:08:58.346 7923.898 - 7983.476: 3.9455% ( 68) 00:08:58.346 7983.476 - 8043.055: 4.5304% ( 67) 00:08:58.346 8043.055 - 8102.633: 5.2025% ( 77) 00:08:58.346 8102.633 - 8162.211: 5.8572% ( 75) 00:08:58.346 8162.211 - 8221.789: 6.5555% ( 80) 00:08:58.346 8221.789 - 8281.367: 7.2277% ( 77) 00:08:58.346 8281.367 - 8340.945: 8.0569% ( 95) 00:08:58.346 8340.945 - 8400.524: 8.8338% ( 89) 00:08:58.346 8400.524 - 8460.102: 9.7416% ( 104) 00:08:58.346 8460.102 - 8519.680: 10.7018% ( 110) 00:08:58.346 8519.680 - 8579.258: 11.6795% ( 112) 00:08:58.346 8579.258 - 8638.836: 12.6833% ( 115) 00:08:58.346 8638.836 - 8698.415: 13.6610% ( 112) 00:08:58.346 8698.415 - 8757.993: 14.6648% ( 115) 00:08:58.346 8757.993 - 8817.571: 15.5814% ( 105) 00:08:58.346 8817.571 - 8877.149: 16.4979% ( 105) 00:08:58.346 8877.149 - 8936.727: 17.4057% ( 104) 00:08:58.346 8936.727 - 8996.305: 18.3572% ( 109) 00:08:58.346 8996.305 - 9055.884: 19.3087% ( 109) 00:08:58.346 9055.884 - 9115.462: 20.2252% ( 105) 00:08:58.346 9115.462 - 9175.040: 21.0981% ( 100) 00:08:58.346 9175.040 - 9234.618: 22.0059% ( 104) 00:08:58.346 9234.618 - 9294.196: 22.9312% ( 106) 00:08:58.346 9294.196 - 9353.775: 23.8041% ( 100) 00:08:58.346 9353.775 - 9413.353: 24.7119% ( 104) 00:08:58.346 9413.353 - 9472.931: 25.6459% ( 107) 00:08:58.346 9472.931 - 9532.509: 26.4839% ( 96) 00:08:58.346 9532.509 - 9592.087: 27.2696% ( 90) 00:08:58.346 9592.087 - 9651.665: 28.0552% ( 90) 00:08:58.346 9651.665 - 9711.244: 28.6313% ( 66) 00:08:58.346 9711.244 - 9770.822: 29.1288% ( 57) 00:08:58.346 9770.822 - 9830.400: 29.6177% ( 56) 00:08:58.346 9830.400 - 9889.978: 30.0716% ( 52) 00:08:58.346 9889.978 - 9949.556: 30.5255% ( 52) 00:08:58.346 9949.556 - 10009.135: 31.0143% ( 56) 00:08:58.346 10009.135 - 10068.713: 31.5031% ( 56) 00:08:58.346 10068.713 - 10128.291: 32.0007% ( 57) 00:08:58.346 10128.291 - 10187.869: 32.5244% ( 60) 00:08:58.346 10187.869 - 10247.447: 33.0831% ( 64) 00:08:58.346 10247.447 - 10307.025: 33.6156% ( 61) 00:08:58.346 10307.025 - 
10366.604: 34.1655% ( 63) 00:08:58.346 10366.604 - 10426.182: 34.7154% ( 63) 00:08:58.346 10426.182 - 10485.760: 35.3701% ( 75) 00:08:58.346 10485.760 - 10545.338: 36.2867% ( 105) 00:08:58.346 10545.338 - 10604.916: 37.3429% ( 121) 00:08:58.346 10604.916 - 10664.495: 38.5649% ( 140) 00:08:58.347 10664.495 - 10724.073: 39.8219% ( 144) 00:08:58.347 10724.073 - 10783.651: 41.2098% ( 159) 00:08:58.347 10783.651 - 10843.229: 42.7025% ( 171) 00:08:58.347 10843.229 - 10902.807: 44.4047% ( 195) 00:08:58.347 10902.807 - 10962.385: 46.2116% ( 207) 00:08:58.347 10962.385 - 11021.964: 48.0534% ( 211) 00:08:58.347 11021.964 - 11081.542: 50.0524% ( 229) 00:08:58.347 11081.542 - 11141.120: 52.0688% ( 231) 00:08:58.347 11141.120 - 11200.698: 53.9979% ( 221) 00:08:58.347 11200.698 - 11260.276: 56.0405% ( 234) 00:08:58.347 11260.276 - 11319.855: 58.0918% ( 235) 00:08:58.347 11319.855 - 11379.433: 60.0646% ( 226) 00:08:58.347 11379.433 - 11439.011: 62.0985% ( 233) 00:08:58.347 11439.011 - 11498.589: 64.0974% ( 229) 00:08:58.347 11498.589 - 11558.167: 66.0964% ( 229) 00:08:58.347 11558.167 - 11617.745: 68.1041% ( 230) 00:08:58.347 11617.745 - 11677.324: 70.0506% ( 223) 00:08:58.347 11677.324 - 11736.902: 71.7441% ( 194) 00:08:58.347 11736.902 - 11796.480: 73.3851% ( 188) 00:08:58.347 11796.480 - 11856.058: 74.8778% ( 171) 00:08:58.347 11856.058 - 11915.636: 76.3006% ( 163) 00:08:58.347 11915.636 - 11975.215: 77.5489% ( 143) 00:08:58.347 11975.215 - 12034.793: 78.6400% ( 125) 00:08:58.347 12034.793 - 12094.371: 79.6177% ( 112) 00:08:58.347 12094.371 - 12153.949: 80.4207% ( 92) 00:08:58.347 12153.949 - 12213.527: 81.1714% ( 86) 00:08:58.347 12213.527 - 12273.105: 81.7388% ( 65) 00:08:58.347 12273.105 - 12332.684: 82.2277% ( 56) 00:08:58.347 12332.684 - 12392.262: 82.6903% ( 53) 00:08:58.347 12392.262 - 12451.840: 83.1791% ( 56) 00:08:58.347 12451.840 - 12511.418: 83.6418% ( 53) 00:08:58.347 12511.418 - 12570.996: 84.1044% ( 53) 00:08:58.347 12570.996 - 12630.575: 84.5321% ( 49) 00:08:58.347 12630.575 - 12690.153: 84.8900% ( 41) 00:08:58.347 12690.153 - 12749.731: 85.3090% ( 48) 00:08:58.347 12749.731 - 12809.309: 85.7018% ( 45) 00:08:58.347 12809.309 - 12868.887: 86.0772% ( 43) 00:08:58.347 12868.887 - 12928.465: 86.4612% ( 44) 00:08:58.347 12928.465 - 12988.044: 86.8104% ( 40) 00:08:58.347 12988.044 - 13047.622: 87.1596% ( 40) 00:08:58.347 13047.622 - 13107.200: 87.5524% ( 45) 00:08:58.347 13107.200 - 13166.778: 87.9888% ( 50) 00:08:58.347 13166.778 - 13226.356: 88.4166% ( 49) 00:08:58.347 13226.356 - 13285.935: 88.8617% ( 51) 00:08:58.347 13285.935 - 13345.513: 89.3331% ( 54) 00:08:58.347 13345.513 - 13405.091: 89.8219% ( 56) 00:08:58.347 13405.091 - 13464.669: 90.2846% ( 53) 00:08:58.347 13464.669 - 13524.247: 90.8170% ( 61) 00:08:58.347 13524.247 - 13583.825: 91.2884% ( 54) 00:08:58.347 13583.825 - 13643.404: 91.7249% ( 50) 00:08:58.347 13643.404 - 13702.982: 92.2137% ( 56) 00:08:58.347 13702.982 - 13762.560: 92.6589% ( 51) 00:08:58.347 13762.560 - 13822.138: 93.0953% ( 50) 00:08:58.347 13822.138 - 13881.716: 93.4881% ( 45) 00:08:58.347 13881.716 - 13941.295: 93.8547% ( 42) 00:08:58.347 13941.295 - 14000.873: 94.2301% ( 43) 00:08:58.347 14000.873 - 14060.451: 94.6229% ( 45) 00:08:58.347 14060.451 - 14120.029: 94.9633% ( 39) 00:08:58.347 14120.029 - 14179.607: 95.2514% ( 33) 00:08:58.347 14179.607 - 14239.185: 95.5220% ( 31) 00:08:58.347 14239.185 - 14298.764: 95.7926% ( 31) 00:08:58.347 14298.764 - 14358.342: 96.0807% ( 33) 00:08:58.347 14358.342 - 14417.920: 96.3163% ( 27) 00:08:58.347 14417.920 - 
14477.498: 96.5171% ( 23) 00:08:58.347 14477.498 - 14537.076: 96.6393% ( 14) 00:08:58.347 14537.076 - 14596.655: 96.7703% ( 15) 00:08:58.347 14596.655 - 14656.233: 96.8663% ( 11) 00:08:58.347 14656.233 - 14715.811: 96.9274% ( 7) 00:08:58.347 14715.811 - 14775.389: 96.9710% ( 5) 00:08:58.347 14775.389 - 14834.967: 96.9797% ( 1) 00:08:58.347 14834.967 - 14894.545: 96.9972% ( 2) 00:08:58.347 14894.545 - 14954.124: 97.0147% ( 2) 00:08:58.347 14954.124 - 15013.702: 97.0321% ( 2) 00:08:58.347 15013.702 - 15073.280: 97.0496% ( 2) 00:08:58.347 15073.280 - 15132.858: 97.0670% ( 2) 00:08:58.347 15132.858 - 15192.436: 97.0845% ( 2) 00:08:58.347 15192.436 - 15252.015: 97.1020% ( 2) 00:08:58.347 15252.015 - 15371.171: 97.1543% ( 6) 00:08:58.347 15371.171 - 15490.327: 97.2503% ( 11) 00:08:58.347 15490.327 - 15609.484: 97.3900% ( 16) 00:08:58.347 15609.484 - 15728.640: 97.4773% ( 10) 00:08:58.347 15728.640 - 15847.796: 97.6170% ( 16) 00:08:58.347 15847.796 - 15966.953: 97.7479% ( 15) 00:08:58.347 15966.953 - 16086.109: 97.8788% ( 15) 00:08:58.347 16086.109 - 16205.265: 98.0447% ( 19) 00:08:58.347 16205.265 - 16324.422: 98.2105% ( 19) 00:08:58.347 16324.422 - 16443.578: 98.3939% ( 21) 00:08:58.347 16443.578 - 16562.735: 98.5684% ( 20) 00:08:58.347 16562.735 - 16681.891: 98.6906% ( 14) 00:08:58.347 16681.891 - 16801.047: 98.7517% ( 7) 00:08:58.347 16801.047 - 16920.204: 98.8041% ( 6) 00:08:58.347 16920.204 - 17039.360: 98.8565% ( 6) 00:08:58.347 17039.360 - 17158.516: 98.8827% ( 3) 00:08:58.347 29789.091 - 29908.247: 98.9001% ( 2) 00:08:58.347 29908.247 - 30027.404: 98.9176% ( 2) 00:08:58.347 30027.404 - 30146.560: 98.9351% ( 2) 00:08:58.347 30146.560 - 30265.716: 98.9612% ( 3) 00:08:58.347 30265.716 - 30384.873: 98.9787% ( 2) 00:08:58.347 30384.873 - 30504.029: 99.0049% ( 3) 00:08:58.347 30504.029 - 30742.342: 99.0485% ( 5) 00:08:58.347 30742.342 - 30980.655: 99.1009% ( 6) 00:08:58.347 30980.655 - 31218.967: 99.1446% ( 5) 00:08:58.347 31218.967 - 31457.280: 99.1969% ( 6) 00:08:58.347 31457.280 - 31695.593: 99.2406% ( 5) 00:08:58.347 31695.593 - 31933.905: 99.2929% ( 6) 00:08:58.347 31933.905 - 32172.218: 99.3366% ( 5) 00:08:58.347 32172.218 - 32410.531: 99.3890% ( 6) 00:08:58.347 32410.531 - 32648.844: 99.4326% ( 5) 00:08:58.347 32648.844 - 32887.156: 99.4413% ( 1) 00:08:58.347 38606.662 - 38844.975: 99.4850% ( 5) 00:08:58.347 38844.975 - 39083.287: 99.5112% ( 3) 00:08:58.347 39083.287 - 39321.600: 99.5635% ( 6) 00:08:58.347 39321.600 - 39559.913: 99.5985% ( 4) 00:08:58.347 39559.913 - 39798.225: 99.6508% ( 6) 00:08:58.347 39798.225 - 40036.538: 99.7032% ( 6) 00:08:58.347 40036.538 - 40274.851: 99.7469% ( 5) 00:08:58.347 40274.851 - 40513.164: 99.7905% ( 5) 00:08:58.347 40513.164 - 40751.476: 99.8429% ( 6) 00:08:58.347 40751.476 - 40989.789: 99.8953% ( 6) 00:08:58.347 40989.789 - 41228.102: 99.9389% ( 5) 00:08:58.347 41228.102 - 41466.415: 99.9913% ( 6) 00:08:58.347 41466.415 - 41704.727: 100.0000% ( 1) 00:08:58.347 00:08:58.347 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:08:58.347 ============================================================================== 00:08:58.347 Range in us Cumulative IO count 00:08:58.347 7328.116 - 7357.905: 0.0087% ( 1) 00:08:58.347 7357.905 - 7387.695: 0.0436% ( 4) 00:08:58.347 7387.695 - 7417.484: 0.0786% ( 4) 00:08:58.347 7417.484 - 7447.273: 0.1484% ( 8) 00:08:58.347 7447.273 - 7477.062: 0.2531% ( 12) 00:08:58.347 7477.062 - 7506.851: 0.3492% ( 11) 00:08:58.347 7506.851 - 7536.640: 0.4976% ( 17) 00:08:58.347 7536.640 - 7566.429: 0.6547% ( 18) 
00:08:58.347 7566.429 - 7596.218: 0.8205% ( 19) 00:08:58.347 7596.218 - 7626.007: 1.0126% ( 22) 00:08:58.347 7626.007 - 7685.585: 1.3966% ( 44) 00:08:58.347 7685.585 - 7745.164: 1.8069% ( 47) 00:08:58.347 7745.164 - 7804.742: 2.2696% ( 53) 00:08:58.347 7804.742 - 7864.320: 2.8544% ( 67) 00:08:58.347 7864.320 - 7923.898: 3.4305% ( 66) 00:08:58.347 7923.898 - 7983.476: 4.0241% ( 68) 00:08:58.347 7983.476 - 8043.055: 4.6700% ( 74) 00:08:58.347 8043.055 - 8102.633: 5.2811% ( 70) 00:08:58.347 8102.633 - 8162.211: 5.9183% ( 73) 00:08:58.347 8162.211 - 8221.789: 6.5817% ( 76) 00:08:58.347 8221.789 - 8281.367: 7.2626% ( 78) 00:08:58.347 8281.367 - 8340.945: 8.0569% ( 91) 00:08:58.347 8340.945 - 8400.524: 8.8774% ( 94) 00:08:58.347 8400.524 - 8460.102: 9.7503% ( 100) 00:08:58.347 8460.102 - 8519.680: 10.5796% ( 95) 00:08:58.347 8519.680 - 8579.258: 11.4351% ( 98) 00:08:58.347 8579.258 - 8638.836: 12.3865% ( 109) 00:08:58.347 8638.836 - 8698.415: 13.3380% ( 109) 00:08:58.347 8698.415 - 8757.993: 14.2982% ( 110) 00:08:58.347 8757.993 - 8817.571: 15.1536% ( 98) 00:08:58.347 8817.571 - 8877.149: 16.0353% ( 101) 00:08:58.347 8877.149 - 8936.727: 16.9169% ( 101) 00:08:58.347 8936.727 - 8996.305: 17.8160% ( 103) 00:08:58.347 8996.305 - 9055.884: 18.7325% ( 105) 00:08:58.347 9055.884 - 9115.462: 19.5793% ( 97) 00:08:58.347 9115.462 - 9175.040: 20.4347% ( 98) 00:08:58.347 9175.040 - 9234.618: 21.3163% ( 101) 00:08:58.347 9234.618 - 9294.196: 22.2067% ( 102) 00:08:58.347 9294.196 - 9353.775: 23.1582% ( 109) 00:08:58.347 9353.775 - 9413.353: 24.0660% ( 104) 00:08:58.347 9413.353 - 9472.931: 24.9651% ( 103) 00:08:58.347 9472.931 - 9532.509: 25.9166% ( 109) 00:08:58.347 9532.509 - 9592.087: 26.7545% ( 96) 00:08:58.347 9592.087 - 9651.665: 27.4703% ( 82) 00:08:58.347 9651.665 - 9711.244: 28.1250% ( 75) 00:08:58.348 9711.244 - 9770.822: 28.6837% ( 64) 00:08:58.348 9770.822 - 9830.400: 29.1987% ( 59) 00:08:58.348 9830.400 - 9889.978: 29.7399% ( 62) 00:08:58.348 9889.978 - 9949.556: 30.3596% ( 71) 00:08:58.348 9949.556 - 10009.135: 30.9445% ( 67) 00:08:58.348 10009.135 - 10068.713: 31.4682% ( 60) 00:08:58.348 10068.713 - 10128.291: 32.0007% ( 61) 00:08:58.348 10128.291 - 10187.869: 32.5419% ( 62) 00:08:58.348 10187.869 - 10247.447: 33.0482% ( 58) 00:08:58.348 10247.447 - 10307.025: 33.5457% ( 57) 00:08:58.348 10307.025 - 10366.604: 34.0171% ( 54) 00:08:58.348 10366.604 - 10426.182: 34.4710% ( 52) 00:08:58.348 10426.182 - 10485.760: 35.0733% ( 69) 00:08:58.348 10485.760 - 10545.338: 35.8415% ( 88) 00:08:58.348 10545.338 - 10604.916: 36.8279% ( 113) 00:08:58.348 10604.916 - 10664.495: 37.8666% ( 119) 00:08:58.348 10664.495 - 10724.073: 39.0712% ( 138) 00:08:58.348 10724.073 - 10783.651: 40.4330% ( 156) 00:08:58.348 10783.651 - 10843.229: 42.0129% ( 181) 00:08:58.348 10843.229 - 10902.807: 43.6627% ( 189) 00:08:58.348 10902.807 - 10962.385: 45.3911% ( 198) 00:08:58.348 10962.385 - 11021.964: 47.3638% ( 226) 00:08:58.348 11021.964 - 11081.542: 49.2842% ( 220) 00:08:58.348 11081.542 - 11141.120: 51.2744% ( 228) 00:08:58.348 11141.120 - 11200.698: 53.3258% ( 235) 00:08:58.348 11200.698 - 11260.276: 55.4295% ( 241) 00:08:58.348 11260.276 - 11319.855: 57.5070% ( 238) 00:08:58.348 11319.855 - 11379.433: 59.6369% ( 244) 00:08:58.348 11379.433 - 11439.011: 61.7144% ( 238) 00:08:58.348 11439.011 - 11498.589: 63.8792% ( 248) 00:08:58.348 11498.589 - 11558.167: 66.0003% ( 243) 00:08:58.348 11558.167 - 11617.745: 68.0953% ( 240) 00:08:58.348 11617.745 - 11677.324: 70.1466% ( 235) 00:08:58.348 11677.324 - 11736.902: 71.9885% ( 
211) 00:08:58.348 11736.902 - 11796.480: 73.6732% ( 193) 00:08:58.348 11796.480 - 11856.058: 75.1833% ( 173) 00:08:58.348 11856.058 - 11915.636: 76.6236% ( 165) 00:08:58.348 11915.636 - 11975.215: 77.9330% ( 150) 00:08:58.348 11975.215 - 12034.793: 79.0677% ( 130) 00:08:58.348 12034.793 - 12094.371: 80.0454% ( 112) 00:08:58.348 12094.371 - 12153.949: 80.9008% ( 98) 00:08:58.348 12153.949 - 12213.527: 81.4944% ( 68) 00:08:58.348 12213.527 - 12273.105: 81.9832% ( 56) 00:08:58.348 12273.105 - 12332.684: 82.4721% ( 56) 00:08:58.348 12332.684 - 12392.262: 82.9696% ( 57) 00:08:58.348 12392.262 - 12451.840: 83.4672% ( 57) 00:08:58.348 12451.840 - 12511.418: 83.9560% ( 56) 00:08:58.348 12511.418 - 12570.996: 84.4012% ( 51) 00:08:58.348 12570.996 - 12630.575: 84.7678% ( 42) 00:08:58.348 12630.575 - 12690.153: 85.1257% ( 41) 00:08:58.348 12690.153 - 12749.731: 85.5098% ( 44) 00:08:58.348 12749.731 - 12809.309: 85.8677% ( 41) 00:08:58.348 12809.309 - 12868.887: 86.2430% ( 43) 00:08:58.348 12868.887 - 12928.465: 86.7144% ( 54) 00:08:58.348 12928.465 - 12988.044: 87.1247% ( 47) 00:08:58.348 12988.044 - 13047.622: 87.5524% ( 49) 00:08:58.348 13047.622 - 13107.200: 87.9277% ( 43) 00:08:58.348 13107.200 - 13166.778: 88.3467% ( 48) 00:08:58.348 13166.778 - 13226.356: 88.7832% ( 50) 00:08:58.348 13226.356 - 13285.935: 89.2196% ( 50) 00:08:58.348 13285.935 - 13345.513: 89.6561% ( 50) 00:08:58.348 13345.513 - 13405.091: 90.1013% ( 51) 00:08:58.348 13405.091 - 13464.669: 90.5988% ( 57) 00:08:58.348 13464.669 - 13524.247: 91.1051% ( 58) 00:08:58.348 13524.247 - 13583.825: 91.5939% ( 56) 00:08:58.348 13583.825 - 13643.404: 92.0915% ( 57) 00:08:58.348 13643.404 - 13702.982: 92.5541% ( 53) 00:08:58.348 13702.982 - 13762.560: 93.0168% ( 53) 00:08:58.348 13762.560 - 13822.138: 93.4794% ( 53) 00:08:58.348 13822.138 - 13881.716: 93.9071% ( 49) 00:08:58.348 13881.716 - 13941.295: 94.2999% ( 45) 00:08:58.348 13941.295 - 14000.873: 94.6840% ( 44) 00:08:58.348 14000.873 - 14060.451: 95.0768% ( 45) 00:08:58.348 14060.451 - 14120.029: 95.3998% ( 37) 00:08:58.348 14120.029 - 14179.607: 95.6966% ( 34) 00:08:58.348 14179.607 - 14239.185: 95.9846% ( 33) 00:08:58.348 14239.185 - 14298.764: 96.2640% ( 32) 00:08:58.348 14298.764 - 14358.342: 96.4997% ( 27) 00:08:58.348 14358.342 - 14417.920: 96.7091% ( 24) 00:08:58.348 14417.920 - 14477.498: 96.8925% ( 21) 00:08:58.348 14477.498 - 14537.076: 97.0670% ( 20) 00:08:58.348 14537.076 - 14596.655: 97.1980% ( 15) 00:08:58.348 14596.655 - 14656.233: 97.2765% ( 9) 00:08:58.348 14656.233 - 14715.811: 97.3638% ( 10) 00:08:58.348 14715.811 - 14775.389: 97.4075% ( 5) 00:08:58.348 14775.389 - 14834.967: 97.4598% ( 6) 00:08:58.348 14834.967 - 14894.545: 97.4948% ( 4) 00:08:58.348 14894.545 - 14954.124: 97.5384% ( 5) 00:08:58.348 14954.124 - 15013.702: 97.5821% ( 5) 00:08:58.348 15013.702 - 15073.280: 97.6170% ( 4) 00:08:58.348 15073.280 - 15132.858: 97.6519% ( 4) 00:08:58.348 15132.858 - 15192.436: 97.6693% ( 2) 00:08:58.348 15192.436 - 15252.015: 97.6781% ( 1) 00:08:58.348 15252.015 - 15371.171: 97.7130% ( 4) 00:08:58.348 15371.171 - 15490.327: 97.7479% ( 4) 00:08:58.348 15490.327 - 15609.484: 97.8177% ( 8) 00:08:58.348 15609.484 - 15728.640: 97.8701% ( 6) 00:08:58.348 15728.640 - 15847.796: 97.9487% ( 9) 00:08:58.348 15847.796 - 15966.953: 98.0622% ( 13) 00:08:58.348 15966.953 - 16086.109: 98.1669% ( 12) 00:08:58.348 16086.109 - 16205.265: 98.2804% ( 13) 00:08:58.348 16205.265 - 16324.422: 98.3939% ( 13) 00:08:58.348 16324.422 - 16443.578: 98.4986% ( 12) 00:08:58.348 16443.578 - 16562.735: 
98.6034% ( 12) 00:08:58.348 16562.735 - 16681.891: 98.7081% ( 12) 00:08:58.348 16681.891 - 16801.047: 98.7692% ( 7) 00:08:58.348 16801.047 - 16920.204: 98.8128% ( 5) 00:08:58.348 16920.204 - 17039.360: 98.8652% ( 6) 00:08:58.348 17039.360 - 17158.516: 98.8827% ( 2) 00:08:58.348 26691.025 - 26810.182: 98.9001% ( 2) 00:08:58.348 26810.182 - 26929.338: 98.9176% ( 2) 00:08:58.348 26929.338 - 27048.495: 98.9438% ( 3) 00:08:58.348 27048.495 - 27167.651: 98.9700% ( 3) 00:08:58.348 27167.651 - 27286.807: 98.9874% ( 2) 00:08:58.348 27286.807 - 27405.964: 99.0136% ( 3) 00:08:58.348 27405.964 - 27525.120: 99.0398% ( 3) 00:08:58.348 27525.120 - 27644.276: 99.0573% ( 2) 00:08:58.348 27644.276 - 27763.433: 99.0834% ( 3) 00:08:58.348 27763.433 - 27882.589: 99.1009% ( 2) 00:08:58.348 27882.589 - 28001.745: 99.1271% ( 3) 00:08:58.348 28001.745 - 28120.902: 99.1446% ( 2) 00:08:58.348 28120.902 - 28240.058: 99.1707% ( 3) 00:08:58.348 28240.058 - 28359.215: 99.1882% ( 2) 00:08:58.348 28359.215 - 28478.371: 99.2144% ( 3) 00:08:58.348 28478.371 - 28597.527: 99.2318% ( 2) 00:08:58.348 28597.527 - 28716.684: 99.2580% ( 3) 00:08:58.348 28716.684 - 28835.840: 99.2842% ( 3) 00:08:58.348 28835.840 - 28954.996: 99.3104% ( 3) 00:08:58.348 28954.996 - 29074.153: 99.3279% ( 2) 00:08:58.348 29074.153 - 29193.309: 99.3541% ( 3) 00:08:58.348 29193.309 - 29312.465: 99.3715% ( 2) 00:08:58.348 29312.465 - 29431.622: 99.3890% ( 2) 00:08:58.348 29431.622 - 29550.778: 99.4152% ( 3) 00:08:58.348 29550.778 - 29669.935: 99.4413% ( 3) 00:08:58.348 35985.222 - 36223.535: 99.4501% ( 1) 00:08:58.348 36223.535 - 36461.847: 99.4850% ( 4) 00:08:58.348 36461.847 - 36700.160: 99.5374% ( 6) 00:08:58.348 36700.160 - 36938.473: 99.5810% ( 5) 00:08:58.348 36938.473 - 37176.785: 99.6159% ( 4) 00:08:58.348 37176.785 - 37415.098: 99.6596% ( 5) 00:08:58.348 37415.098 - 37653.411: 99.7032% ( 5) 00:08:58.348 37653.411 - 37891.724: 99.7469% ( 5) 00:08:58.348 37891.724 - 38130.036: 99.7992% ( 6) 00:08:58.348 38130.036 - 38368.349: 99.8429% ( 5) 00:08:58.348 38368.349 - 38606.662: 99.8865% ( 5) 00:08:58.348 38606.662 - 38844.975: 99.9302% ( 5) 00:08:58.348 38844.975 - 39083.287: 99.9825% ( 6) 00:08:58.348 39083.287 - 39321.600: 100.0000% ( 2) 00:08:58.348 00:08:58.348 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:08:58.348 ============================================================================== 00:08:58.348 Range in us Cumulative IO count 00:08:58.348 7298.327 - 7328.116: 0.0262% ( 3) 00:08:58.348 7328.116 - 7357.905: 0.0611% ( 4) 00:08:58.348 7357.905 - 7387.695: 0.0960% ( 4) 00:08:58.348 7387.695 - 7417.484: 0.1397% ( 5) 00:08:58.348 7417.484 - 7447.273: 0.1833% ( 5) 00:08:58.348 7447.273 - 7477.062: 0.2619% ( 9) 00:08:58.348 7477.062 - 7506.851: 0.3579% ( 11) 00:08:58.348 7506.851 - 7536.640: 0.4976% ( 16) 00:08:58.348 7536.640 - 7566.429: 0.6198% ( 14) 00:08:58.348 7566.429 - 7596.218: 0.7594% ( 16) 00:08:58.348 7596.218 - 7626.007: 0.8991% ( 16) 00:08:58.348 7626.007 - 7685.585: 1.2483% ( 40) 00:08:58.348 7685.585 - 7745.164: 1.6672% ( 48) 00:08:58.348 7745.164 - 7804.742: 2.1735% ( 58) 00:08:58.348 7804.742 - 7864.320: 2.6885% ( 59) 00:08:58.348 7864.320 - 7923.898: 3.3083% ( 71) 00:08:58.348 7923.898 - 7983.476: 3.8932% ( 67) 00:08:58.348 7983.476 - 8043.055: 4.4780% ( 67) 00:08:58.348 8043.055 - 8102.633: 5.0803% ( 69) 00:08:58.348 8102.633 - 8162.211: 5.7088% ( 72) 00:08:58.348 8162.211 - 8221.789: 6.3897% ( 78) 00:08:58.348 8221.789 - 8281.367: 7.1316% ( 85) 00:08:58.348 8281.367 - 8340.945: 7.9347% ( 92) 00:08:58.348 
00:08:58.348 [tail of the preceding latency histogram (presumably PCIE (0000:00:12.0) NSID 1): buckets 8340.945us - 35508.596us, cumulative 8.7552% -> 100.0000%]
00:08:58.349 
00:08:58.349 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0:
00:08:58.349 ==============================================================================
00:08:58.349        Range in us     Cumulative    IO count
00:08:58.349 [histogram buckets 7268.538us - 31933.905us, cumulative 0.0087% -> 100.0000%]
00:08:58.350 
00:08:58.350 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0:
00:08:58.350 ==============================================================================
00:08:58.350        Range in us     Cumulative    IO count
00:08:58.350 [histogram buckets 7328.116us - 28240.058us, cumulative 0.0175% -> 100.0000%]
00:08:58.351 
00:08:58.351 07:49:00 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0
00:08:59.729 Initializing NVMe Controllers
00:08:59.729 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:08:59.729 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:08:59.729 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
00:08:59.729 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
00:08:59.729 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0
00:08:59.729 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0
00:08:59.729 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0
00:08:59.729 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0
00:08:59.729 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0
00:08:59.729 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0
00:08:59.729 Initialization complete. Launching workers.
00:08:59.729 ========================================================
00:08:59.729                                                                             Latency(us)
00:08:59.729 Device Information                          :       IOPS      MiB/s    Average        min        max
00:08:59.729 PCIE (0000:00:10.0) NSID 1 from core  0:    9438.64     110.61   13605.10    9677.86   42651.73
00:08:59.729 PCIE (0000:00:11.0) NSID 1 from core  0:    9438.64     110.61   13585.78    9708.37   40586.80
00:08:59.729 PCIE (0000:00:13.0) NSID 1 from core  0:    9438.64     110.61   13565.65    9703.65   39247.09
00:08:59.729 PCIE (0000:00:12.0) NSID 1 from core  0:    9438.64     110.61   13545.25    9566.59   37122.36
00:08:59.729 PCIE (0000:00:12.0) NSID 2 from core  0:    9438.64     110.61   13525.39    9670.28   35107.69
00:08:59.729 PCIE (0000:00:12.0) NSID 3 from core  0:    9438.64     110.61   13506.03    9730.93   33146.39
00:08:59.729 ========================================================
00:08:59.729 Total                                   :   56631.86     663.65   13555.53    9566.59   42651.73
00:08:59.729 
00:08:59.729 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0:
00:08:59.729 =================================================================================
00:08:59.729   1.00000% : 10247.447us
00:08:59.729  10.00000% : 11319.855us
00:08:59.729  25.00000% : 11796.480us
00:08:59.729  50.00000% : 12570.996us
00:08:59.729  75.00000% : 13822.138us
00:08:59.729  90.00000% : 17873.455us
00:08:59.729  95.00000% : 20018.269us
00:08:59.729  98.00000% : 23235.491us
00:08:59.729  99.00000% : 32648.844us
00:08:59.729  99.50000% : 40751.476us
00:08:59.729  99.90000% : 42419.665us
00:08:59.729  99.99000% : 42657.978us
00:08:59.729  99.99900% : 42657.978us
00:08:59.729  99.99990% : 42657.978us
00:08:59.729  99.99999% : 42657.978us
00:08:59.729 
00:08:59.729 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0:
00:08:59.729 =================================================================================
00:08:59.729   1.00000% : 10485.760us
00:08:59.729  10.00000% : 11379.433us
00:08:59.729  25.00000% : 11796.480us
00:08:59.729  50.00000% : 12451.840us
00:08:59.729  75.00000% : 13762.560us
00:08:59.729  90.00000% : 18111.767us
00:08:59.729  95.00000% : 20018.269us
00:08:59.729  98.00000% : 23950.429us
00:08:59.729  99.00000% : 31457.280us
00:08:59.729  99.50000% : 39083.287us
00:08:59.729  99.90000% : 40513.164us
00:08:59.729  99.99000% : 40751.476us
00:08:59.729  99.99900% : 40751.476us
00:08:59.729  99.99990% : 40751.476us
00:08:59.729  99.99999% : 40751.476us
00:08:59.729 
00:08:59.729 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0:
00:08:59.729 =================================================================================
00:08:59.729   1.00000% : 10426.182us
00:08:59.729  10.00000% : 11439.011us
00:08:59.729  25.00000% : 11856.058us
00:08:59.729  50.00000% : 12511.418us
00:08:59.729  75.00000% : 13822.138us
00:08:59.729  90.00000% : 17992.611us
00:08:59.729  95.00000% : 19660.800us
00:08:59.729  98.00000% : 22878.022us
00:08:59.729  99.00000% : 29908.247us
00:08:59.729  99.50000% : 37653.411us
00:08:59.729  99.90000% : 39083.287us
00:08:59.729  99.99000% : 39321.600us
00:08:59.729  99.99900% : 39321.600us
00:08:59.729  99.99990% : 39321.600us
00:08:59.729  99.99999% : 39321.600us
00:08:59.729 
00:08:59.729 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0:
00:08:59.729 =================================================================================
00:08:59.729   1.00000% : 10366.604us
00:08:59.729  10.00000% : 11379.433us
00:08:59.729  25.00000% : 11856.058us
00:08:59.729  50.00000% : 12511.418us
00:08:59.729  75.00000% : 13822.138us
00:08:59.729  90.00000% : 18230.924us
00:08:59.729  95.00000% : 19779.956us
00:08:59.729  98.00000% : 22282.240us
00:08:59.729  99.00000% : 27882.589us
00:08:59.729  99.50000% : 35746.909us
00:08:59.729  99.90000% : 36938.473us
00:08:59.729  99.99000% : 37176.785us
00:08:59.729  99.99900% : 37176.785us
00:08:59.729  99.99990% : 37176.785us
00:08:59.729  99.99999% : 37176.785us
00:08:59.729 
00:08:59.729 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0:
00:08:59.729 =================================================================================
00:08:59.729   1.00000% : 10366.604us
00:08:59.729  10.00000% : 11439.011us
00:08:59.729  25.00000% : 11856.058us
00:08:59.729  50.00000% : 12511.418us
00:08:59.729  75.00000% : 13881.716us
00:08:59.729  90.00000% : 18111.767us
00:08:59.729  95.00000% : 20137.425us
00:08:59.729  98.00000% : 21448.145us
00:08:59.729  99.00000% : 26333.556us
00:08:59.729  99.50000% : 33602.095us
00:08:59.729  99.90000% : 35031.971us
00:08:59.729  99.99000% : 35270.284us
00:08:59.729  99.99900% : 35270.284us
00:08:59.729  99.99990% : 35270.284us
00:08:59.729  99.99999% : 35270.284us
00:08:59.729 
00:08:59.729 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0:
00:08:59.729 =================================================================================
00:08:59.729   1.00000% : 10307.025us
00:08:59.729  10.00000% : 11439.011us
00:08:59.729  25.00000% : 11856.058us
00:08:59.729  50.00000% : 12511.418us
00:08:59.729  75.00000% : 13822.138us
00:08:59.729  90.00000% : 18111.767us
00:08:59.729  95.00000% : 20018.269us
00:08:59.729  98.00000% : 23235.491us
00:08:59.729  99.00000% : 24665.367us
00:08:59.729  99.50000% : 31695.593us
00:08:59.729  99.90000% : 32887.156us
00:08:59.729  99.99000% : 33363.782us
00:08:59.729  99.99900% : 33363.782us
00:08:59.729  99.99990% : 33363.782us
00:08:59.729  99.99999% : 33363.782us
00:08:59.729 
00:08:59.729 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0:
00:08:59.729 ==============================================================================
00:08:59.729        Range in us     Cumulative    IO count
00:08:59.729 [histogram buckets 9651.665us - 42657.978us, cumulative 0.0528% -> 100.0000%]
00:08:59.730 
00:08:59.730 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0:
00:08:59.730 ==============================================================================
00:08:59.730        Range in us     Cumulative    IO count
00:08:59.730 [histogram buckets 9651.665us - 40751.476us, cumulative 0.0106% -> 100.0000%]
00:08:59.731 
00:08:59.731 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0:
00:08:59.731 ==============================================================================
00:08:59.731        Range in us     Cumulative    IO count
00:08:59.731 [histogram buckets 9651.665us - 39321.600us, cumulative 0.0211% -> 100.0000%]
00:08:59.732 
00:08:59.732 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0:
00:08:59.732 ==============================================================================
00:08:59.732        Range in us     Cumulative    IO count
00:08:59.732 [histogram buckets from 9532.509us, cumulative 0.0211% upward; the log excerpt breaks off mid-histogram at the 16205.265us bucket]
16324.422: 87.2677% ( 19) 00:08:59.733 16324.422 - 16443.578: 87.4155% ( 14) 00:08:59.733 16443.578 - 16562.735: 87.5000% ( 8) 00:08:59.733 16562.735 - 16681.891: 87.7745% ( 26) 00:08:59.733 16681.891 - 16801.047: 88.0701% ( 28) 00:08:59.733 16801.047 - 16920.204: 88.2707% ( 19) 00:08:59.733 16920.204 - 17039.360: 88.5135% ( 23) 00:08:59.733 17039.360 - 17158.516: 88.7458% ( 22) 00:08:59.733 17158.516 - 17277.673: 89.0097% ( 25) 00:08:59.733 17277.673 - 17396.829: 89.2631% ( 24) 00:08:59.733 17396.829 - 17515.985: 89.4954% ( 22) 00:08:59.733 17515.985 - 17635.142: 89.7382% ( 23) 00:08:59.733 17635.142 - 17754.298: 89.8860% ( 14) 00:08:59.733 17754.298 - 17873.455: 89.9282% ( 4) 00:08:59.733 17873.455 - 17992.611: 89.9493% ( 2) 00:08:59.733 17992.611 - 18111.767: 89.9810% ( 3) 00:08:59.733 18111.767 - 18230.924: 90.2133% ( 22) 00:08:59.733 18230.924 - 18350.080: 90.5300% ( 30) 00:08:59.733 18350.080 - 18469.236: 90.9945% ( 44) 00:08:59.733 18469.236 - 18588.393: 91.3746% ( 36) 00:08:59.733 18588.393 - 18707.549: 91.7230% ( 33) 00:08:59.733 18707.549 - 18826.705: 92.1664% ( 42) 00:08:59.733 18826.705 - 18945.862: 92.4831% ( 30) 00:08:59.733 18945.862 - 19065.018: 92.8209% ( 32) 00:08:59.733 19065.018 - 19184.175: 93.2538% ( 41) 00:08:59.733 19184.175 - 19303.331: 93.6128% ( 34) 00:08:59.733 19303.331 - 19422.487: 93.9611% ( 33) 00:08:59.733 19422.487 - 19541.644: 94.3307% ( 35) 00:08:59.733 19541.644 - 19660.800: 94.6896% ( 34) 00:08:59.733 19660.800 - 19779.956: 95.0697% ( 36) 00:08:59.733 19779.956 - 19899.113: 95.3970% ( 31) 00:08:59.733 19899.113 - 20018.269: 95.7137% ( 30) 00:08:59.733 20018.269 - 20137.425: 96.0304% ( 30) 00:08:59.733 20137.425 - 20256.582: 96.2943% ( 25) 00:08:59.733 20256.582 - 20375.738: 96.4844% ( 18) 00:08:59.733 20375.738 - 20494.895: 96.7166% ( 22) 00:08:59.733 20494.895 - 20614.051: 96.8856% ( 16) 00:08:59.733 20614.051 - 20733.207: 97.1178% ( 22) 00:08:59.733 20733.207 - 20852.364: 97.3079% ( 18) 00:08:59.733 20852.364 - 20971.520: 97.4134% ( 10) 00:08:59.733 20971.520 - 21090.676: 97.4873% ( 7) 00:08:59.733 21090.676 - 21209.833: 97.5507% ( 6) 00:08:59.733 21209.833 - 21328.989: 97.6140% ( 6) 00:08:59.733 21328.989 - 21448.145: 97.6351% ( 2) 00:08:59.733 21448.145 - 21567.302: 97.6562% ( 2) 00:08:59.733 21567.302 - 21686.458: 97.6774% ( 2) 00:08:59.733 21686.458 - 21805.615: 97.7513% ( 7) 00:08:59.733 21805.615 - 21924.771: 97.8146% ( 6) 00:08:59.733 21924.771 - 22043.927: 97.8780% ( 6) 00:08:59.733 22043.927 - 22163.084: 97.9413% ( 6) 00:08:59.733 22163.084 - 22282.240: 98.0046% ( 6) 00:08:59.733 22282.240 - 22401.396: 98.0574% ( 5) 00:08:59.733 22401.396 - 22520.553: 98.1102% ( 5) 00:08:59.733 22520.553 - 22639.709: 98.1736% ( 6) 00:08:59.733 22639.709 - 22758.865: 98.2369% ( 6) 00:08:59.733 22758.865 - 22878.022: 98.3003% ( 6) 00:08:59.733 22878.022 - 22997.178: 98.3636% ( 6) 00:08:59.733 22997.178 - 23116.335: 98.4269% ( 6) 00:08:59.733 23116.335 - 23235.491: 98.4586% ( 3) 00:08:59.733 23235.491 - 23354.647: 98.4903% ( 3) 00:08:59.733 23354.647 - 23473.804: 98.5220% ( 3) 00:08:59.733 23473.804 - 23592.960: 98.5536% ( 3) 00:08:59.733 23592.960 - 23712.116: 98.5959% ( 4) 00:08:59.733 23712.116 - 23831.273: 98.6275% ( 3) 00:08:59.733 23831.273 - 23950.429: 98.6486% ( 2) 00:08:59.733 26810.182 - 26929.338: 98.6803% ( 3) 00:08:59.733 26929.338 - 27048.495: 98.7226% ( 4) 00:08:59.733 27048.495 - 27167.651: 98.7648% ( 4) 00:08:59.733 27167.651 - 27286.807: 98.8070% ( 4) 00:08:59.733 27286.807 - 27405.964: 98.8492% ( 4) 00:08:59.733 27405.964 - 27525.120: 
98.8809% ( 3) 00:08:59.733 27525.120 - 27644.276: 98.9231% ( 4) 00:08:59.733 27644.276 - 27763.433: 98.9654% ( 4) 00:08:59.733 27763.433 - 27882.589: 99.0076% ( 4) 00:08:59.733 27882.589 - 28001.745: 99.0498% ( 4) 00:08:59.733 28001.745 - 28120.902: 99.0921% ( 4) 00:08:59.733 28120.902 - 28240.058: 99.1343% ( 4) 00:08:59.733 28240.058 - 28359.215: 99.1765% ( 4) 00:08:59.733 28359.215 - 28478.371: 99.2082% ( 3) 00:08:59.733 28478.371 - 28597.527: 99.2504% ( 4) 00:08:59.733 28597.527 - 28716.684: 99.2927% ( 4) 00:08:59.733 28716.684 - 28835.840: 99.3243% ( 3) 00:08:59.733 34793.658 - 35031.971: 99.3349% ( 1) 00:08:59.733 35031.971 - 35270.284: 99.4088% ( 7) 00:08:59.733 35270.284 - 35508.596: 99.4932% ( 8) 00:08:59.733 35508.596 - 35746.909: 99.5671% ( 7) 00:08:59.733 35746.909 - 35985.222: 99.6410% ( 7) 00:08:59.733 35985.222 - 36223.535: 99.7149% ( 7) 00:08:59.733 36223.535 - 36461.847: 99.7994% ( 8) 00:08:59.733 36461.847 - 36700.160: 99.8628% ( 6) 00:08:59.733 36700.160 - 36938.473: 99.9367% ( 7) 00:08:59.733 36938.473 - 37176.785: 100.0000% ( 6) 00:08:59.733 00:08:59.733 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:08:59.733 ============================================================================== 00:08:59.733 Range in us Cumulative IO count 00:08:59.733 9651.665 - 9711.244: 0.0422% ( 4) 00:08:59.733 9711.244 - 9770.822: 0.1056% ( 6) 00:08:59.733 9770.822 - 9830.400: 0.1584% ( 5) 00:08:59.733 9830.400 - 9889.978: 0.1795% ( 2) 00:08:59.733 9889.978 - 9949.556: 0.2111% ( 3) 00:08:59.733 9949.556 - 10009.135: 0.2428% ( 3) 00:08:59.733 10009.135 - 10068.713: 0.3062% ( 6) 00:08:59.733 10068.713 - 10128.291: 0.4751% ( 16) 00:08:59.733 10128.291 - 10187.869: 0.6440% ( 16) 00:08:59.733 10187.869 - 10247.447: 0.8024% ( 15) 00:08:59.733 10247.447 - 10307.025: 0.9924% ( 18) 00:08:59.733 10307.025 - 10366.604: 1.1824% ( 18) 00:08:59.733 10366.604 - 10426.182: 1.3514% ( 16) 00:08:59.733 10426.182 - 10485.760: 1.4780% ( 12) 00:08:59.733 10485.760 - 10545.338: 1.8159% ( 32) 00:08:59.733 10545.338 - 10604.916: 2.0904% ( 26) 00:08:59.733 10604.916 - 10664.495: 2.3649% ( 26) 00:08:59.734 10664.495 - 10724.073: 2.5866% ( 21) 00:08:59.734 10724.073 - 10783.651: 2.7872% ( 19) 00:08:59.734 10783.651 - 10843.229: 3.1883% ( 38) 00:08:59.734 10843.229 - 10902.807: 3.6740% ( 46) 00:08:59.734 10902.807 - 10962.385: 4.2546% ( 55) 00:08:59.734 10962.385 - 11021.964: 4.8247% ( 54) 00:08:59.734 11021.964 - 11081.542: 5.2787% ( 43) 00:08:59.734 11081.542 - 11141.120: 5.8171% ( 51) 00:08:59.734 11141.120 - 11200.698: 6.3978% ( 55) 00:08:59.734 11200.698 - 11260.276: 7.0840% ( 65) 00:08:59.734 11260.276 - 11319.855: 8.3615% ( 121) 00:08:59.734 11319.855 - 11379.433: 9.9768% ( 153) 00:08:59.734 11379.433 - 11439.011: 11.4865% ( 143) 00:08:59.734 11439.011 - 11498.589: 13.3763% ( 179) 00:08:59.734 11498.589 - 11558.167: 15.3716% ( 189) 00:08:59.734 11558.167 - 11617.745: 17.4409% ( 196) 00:08:59.734 11617.745 - 11677.324: 19.6368% ( 208) 00:08:59.734 11677.324 - 11736.902: 21.9383% ( 218) 00:08:59.734 11736.902 - 11796.480: 24.3560% ( 229) 00:08:59.734 11796.480 - 11856.058: 26.9109% ( 242) 00:08:59.734 11856.058 - 11915.636: 29.2546% ( 222) 00:08:59.734 11915.636 - 11975.215: 31.5034% ( 213) 00:08:59.734 11975.215 - 12034.793: 33.7732% ( 215) 00:08:59.734 12034.793 - 12094.371: 35.9797% ( 209) 00:08:59.734 12094.371 - 12153.949: 38.1334% ( 204) 00:08:59.734 12153.949 - 12213.527: 40.2872% ( 204) 00:08:59.734 12213.527 - 12273.105: 42.4831% ( 208) 00:08:59.734 12273.105 - 12332.684: 44.5840% ( 
199) 00:08:59.734 12332.684 - 12392.262: 46.6744% ( 198) 00:08:59.734 12392.262 - 12451.840: 48.7542% ( 197) 00:08:59.734 12451.840 - 12511.418: 50.6546% ( 180) 00:08:59.734 12511.418 - 12570.996: 52.2065% ( 147) 00:08:59.734 12570.996 - 12630.575: 53.5579% ( 128) 00:08:59.734 12630.575 - 12690.153: 54.9514% ( 132) 00:08:59.734 12690.153 - 12749.731: 56.2711% ( 125) 00:08:59.734 12749.731 - 12809.309: 57.2952% ( 97) 00:08:59.734 12809.309 - 12868.887: 58.2770% ( 93) 00:08:59.734 12868.887 - 12928.465: 59.2166% ( 89) 00:08:59.734 12928.465 - 12988.044: 60.1246% ( 86) 00:08:59.734 12988.044 - 13047.622: 61.2331% ( 105) 00:08:59.734 13047.622 - 13107.200: 62.2572% ( 97) 00:08:59.734 13107.200 - 13166.778: 63.2390% ( 93) 00:08:59.734 13166.778 - 13226.356: 64.3159% ( 102) 00:08:59.734 13226.356 - 13285.935: 65.4033% ( 103) 00:08:59.734 13285.935 - 13345.513: 66.6068% ( 114) 00:08:59.734 13345.513 - 13405.091: 67.7682% ( 110) 00:08:59.734 13405.091 - 13464.669: 68.9400% ( 111) 00:08:59.734 13464.669 - 13524.247: 70.0697% ( 107) 00:08:59.734 13524.247 - 13583.825: 71.1571% ( 103) 00:08:59.734 13583.825 - 13643.404: 72.1178% ( 91) 00:08:59.734 13643.404 - 13702.982: 73.0363% ( 87) 00:08:59.734 13702.982 - 13762.560: 73.8387% ( 76) 00:08:59.734 13762.560 - 13822.138: 74.7466% ( 86) 00:08:59.734 13822.138 - 13881.716: 75.5807% ( 79) 00:08:59.734 13881.716 - 13941.295: 76.4358% ( 81) 00:08:59.734 13941.295 - 14000.873: 77.2171% ( 74) 00:08:59.734 14000.873 - 14060.451: 77.9350% ( 68) 00:08:59.734 14060.451 - 14120.029: 78.6951% ( 72) 00:08:59.734 14120.029 - 14179.607: 79.4130% ( 68) 00:08:59.734 14179.607 - 14239.185: 80.0676% ( 62) 00:08:59.734 14239.185 - 14298.764: 80.5849% ( 49) 00:08:59.734 14298.764 - 14358.342: 80.9861% ( 38) 00:08:59.734 14358.342 - 14417.920: 81.3556% ( 35) 00:08:59.734 14417.920 - 14477.498: 81.7779% ( 40) 00:08:59.734 14477.498 - 14537.076: 82.0840% ( 29) 00:08:59.734 14537.076 - 14596.655: 82.3902% ( 29) 00:08:59.734 14596.655 - 14656.233: 82.6753% ( 27) 00:08:59.734 14656.233 - 14715.811: 82.9920% ( 30) 00:08:59.734 14715.811 - 14775.389: 83.2876% ( 28) 00:08:59.734 14775.389 - 14834.967: 83.5304% ( 23) 00:08:59.734 14834.967 - 14894.545: 83.7627% ( 22) 00:08:59.734 14894.545 - 14954.124: 84.0266% ( 25) 00:08:59.734 14954.124 - 15013.702: 84.1955% ( 16) 00:08:59.734 15013.702 - 15073.280: 84.3750% ( 17) 00:08:59.734 15073.280 - 15132.858: 84.5228% ( 14) 00:08:59.734 15132.858 - 15192.436: 84.7128% ( 18) 00:08:59.734 15192.436 - 15252.015: 84.8818% ( 16) 00:08:59.734 15252.015 - 15371.171: 85.3463% ( 44) 00:08:59.734 15371.171 - 15490.327: 85.6841% ( 32) 00:08:59.734 15490.327 - 15609.484: 86.0008% ( 30) 00:08:59.734 15609.484 - 15728.640: 86.3176% ( 30) 00:08:59.734 15728.640 - 15847.796: 86.6554% ( 32) 00:08:59.734 15847.796 - 15966.953: 86.8982% ( 23) 00:08:59.734 15966.953 - 16086.109: 87.1410% ( 23) 00:08:59.734 16086.109 - 16205.265: 87.4261% ( 27) 00:08:59.734 16205.265 - 16324.422: 87.6795% ( 24) 00:08:59.734 16324.422 - 16443.578: 87.8801% ( 19) 00:08:59.734 16443.578 - 16562.735: 88.0807% ( 19) 00:08:59.734 16562.735 - 16681.891: 88.3129% ( 22) 00:08:59.734 16681.891 - 16801.047: 88.5241% ( 20) 00:08:59.734 16801.047 - 16920.204: 88.6508% ( 12) 00:08:59.734 16920.204 - 17039.360: 88.8408% ( 18) 00:08:59.734 17039.360 - 17158.516: 88.9675% ( 12) 00:08:59.734 17158.516 - 17277.673: 89.1153% ( 14) 00:08:59.734 17277.673 - 17396.829: 89.2420% ( 12) 00:08:59.734 17396.829 - 17515.985: 89.3581% ( 11) 00:08:59.734 17515.985 - 17635.142: 89.4848% ( 12) 00:08:59.734 
17635.142 - 17754.298: 89.6326% ( 14) 00:08:59.734 17754.298 - 17873.455: 89.7698% ( 13) 00:08:59.734 17873.455 - 17992.611: 89.9071% ( 13) 00:08:59.734 17992.611 - 18111.767: 90.0443% ( 13) 00:08:59.734 18111.767 - 18230.924: 90.2660% ( 21) 00:08:59.734 18230.924 - 18350.080: 90.6356% ( 35) 00:08:59.734 18350.080 - 18469.236: 91.1001% ( 44) 00:08:59.734 18469.236 - 18588.393: 91.4379% ( 32) 00:08:59.734 18588.393 - 18707.549: 91.8180% ( 36) 00:08:59.734 18707.549 - 18826.705: 92.1347% ( 30) 00:08:59.734 18826.705 - 18945.862: 92.3881% ( 24) 00:08:59.734 18945.862 - 19065.018: 92.6943% ( 29) 00:08:59.734 19065.018 - 19184.175: 93.0532% ( 34) 00:08:59.734 19184.175 - 19303.331: 93.4122% ( 34) 00:08:59.734 19303.331 - 19422.487: 93.7183% ( 29) 00:08:59.734 19422.487 - 19541.644: 93.9611% ( 23) 00:08:59.734 19541.644 - 19660.800: 94.1723% ( 20) 00:08:59.734 19660.800 - 19779.956: 94.4257% ( 24) 00:08:59.734 19779.956 - 19899.113: 94.6685% ( 23) 00:08:59.734 19899.113 - 20018.269: 94.9113% ( 23) 00:08:59.734 20018.269 - 20137.425: 95.1753% ( 25) 00:08:59.734 20137.425 - 20256.582: 95.5342% ( 34) 00:08:59.734 20256.582 - 20375.738: 95.8298% ( 28) 00:08:59.734 20375.738 - 20494.895: 96.1360% ( 29) 00:08:59.734 20494.895 - 20614.051: 96.4844% ( 33) 00:08:59.734 20614.051 - 20733.207: 96.8433% ( 34) 00:08:59.734 20733.207 - 20852.364: 97.2023% ( 34) 00:08:59.734 20852.364 - 20971.520: 97.4345% ( 22) 00:08:59.734 20971.520 - 21090.676: 97.6351% ( 19) 00:08:59.734 21090.676 - 21209.833: 97.8252% ( 18) 00:08:59.734 21209.833 - 21328.989: 97.9413% ( 11) 00:08:59.734 21328.989 - 21448.145: 98.0046% ( 6) 00:08:59.734 21448.145 - 21567.302: 98.0785% ( 7) 00:08:59.734 21567.302 - 21686.458: 98.1313% ( 5) 00:08:59.734 21686.458 - 21805.615: 98.1841% ( 5) 00:08:59.734 21805.615 - 21924.771: 98.2158% ( 3) 00:08:59.734 21924.771 - 22043.927: 98.2686% ( 5) 00:08:59.734 22043.927 - 22163.084: 98.3108% ( 4) 00:08:59.734 22163.084 - 22282.240: 98.3425% ( 3) 00:08:59.734 22282.240 - 22401.396: 98.3847% ( 4) 00:08:59.734 22401.396 - 22520.553: 98.4164% ( 3) 00:08:59.734 22520.553 - 22639.709: 98.4692% ( 5) 00:08:59.734 22639.709 - 22758.865: 98.5220% ( 5) 00:08:59.734 22758.865 - 22878.022: 98.5642% ( 4) 00:08:59.734 22878.022 - 22997.178: 98.6064% ( 4) 00:08:59.734 22997.178 - 23116.335: 98.6486% ( 4) 00:08:59.734 25022.836 - 25141.993: 98.6698% ( 2) 00:08:59.734 25141.993 - 25261.149: 98.7014% ( 3) 00:08:59.734 25261.149 - 25380.305: 98.7437% ( 4) 00:08:59.734 25380.305 - 25499.462: 98.7753% ( 3) 00:08:59.734 25499.462 - 25618.618: 98.8070% ( 3) 00:08:59.734 25618.618 - 25737.775: 98.8492% ( 4) 00:08:59.734 25737.775 - 25856.931: 98.8809% ( 3) 00:08:59.734 25856.931 - 25976.087: 98.9231% ( 4) 00:08:59.734 25976.087 - 26095.244: 98.9548% ( 3) 00:08:59.734 26095.244 - 26214.400: 98.9970% ( 4) 00:08:59.734 26214.400 - 26333.556: 99.0287% ( 3) 00:08:59.734 26333.556 - 26452.713: 99.0604% ( 3) 00:08:59.734 26452.713 - 26571.869: 99.1026% ( 4) 00:08:59.734 26571.869 - 26691.025: 99.1343% ( 3) 00:08:59.734 26691.025 - 26810.182: 99.1660% ( 3) 00:08:59.734 26810.182 - 26929.338: 99.2082% ( 4) 00:08:59.734 26929.338 - 27048.495: 99.2399% ( 3) 00:08:59.734 27048.495 - 27167.651: 99.2715% ( 3) 00:08:59.734 27167.651 - 27286.807: 99.3138% ( 4) 00:08:59.734 27286.807 - 27405.964: 99.3243% ( 1) 00:08:59.734 32887.156 - 33125.469: 99.3771% ( 5) 00:08:59.734 33125.469 - 33363.782: 99.4510% ( 7) 00:08:59.734 33363.782 - 33602.095: 99.5144% ( 6) 00:08:59.734 33602.095 - 33840.407: 99.5988% ( 8) 00:08:59.734 33840.407 - 34078.720: 
99.6727% ( 7) 00:08:59.734 34078.720 - 34317.033: 99.7572% ( 8) 00:08:59.734 34317.033 - 34555.345: 99.8311% ( 7) 00:08:59.734 34555.345 - 34793.658: 99.8944% ( 6) 00:08:59.734 34793.658 - 35031.971: 99.9683% ( 7) 00:08:59.734 35031.971 - 35270.284: 100.0000% ( 3) 00:08:59.734 00:08:59.734 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:08:59.734 ============================================================================== 00:08:59.734 Range in us Cumulative IO count 00:08:59.734 9711.244 - 9770.822: 0.0422% ( 4) 00:08:59.734 9770.822 - 9830.400: 0.0950% ( 5) 00:08:59.734 9830.400 - 9889.978: 0.1372% ( 4) 00:08:59.734 9889.978 - 9949.556: 0.2217% ( 8) 00:08:59.734 9949.556 - 10009.135: 0.3062% ( 8) 00:08:59.734 10009.135 - 10068.713: 0.4329% ( 12) 00:08:59.734 10068.713 - 10128.291: 0.5595% ( 12) 00:08:59.734 10128.291 - 10187.869: 0.7285% ( 16) 00:08:59.734 10187.869 - 10247.447: 0.8657% ( 13) 00:08:59.734 10247.447 - 10307.025: 1.0452% ( 17) 00:08:59.734 10307.025 - 10366.604: 1.2247% ( 17) 00:08:59.734 10366.604 - 10426.182: 1.3830% ( 15) 00:08:59.734 10426.182 - 10485.760: 1.6258% ( 23) 00:08:59.734 10485.760 - 10545.338: 1.8687% ( 23) 00:08:59.734 10545.338 - 10604.916: 2.1326% ( 25) 00:08:59.734 10604.916 - 10664.495: 2.4071% ( 26) 00:08:59.734 10664.495 - 10724.073: 2.6816% ( 26) 00:08:59.734 10724.073 - 10783.651: 3.0405% ( 34) 00:08:59.734 10783.651 - 10843.229: 3.5895% ( 52) 00:08:59.734 10843.229 - 10902.807: 4.0435% ( 43) 00:08:59.734 10902.807 - 10962.385: 4.4447% ( 38) 00:08:59.734 10962.385 - 11021.964: 4.9620% ( 49) 00:08:59.734 11021.964 - 11081.542: 5.4582% ( 47) 00:08:59.735 11081.542 - 11141.120: 6.0177% ( 53) 00:08:59.735 11141.120 - 11200.698: 6.6090% ( 56) 00:08:59.735 11200.698 - 11260.276: 7.3374% ( 69) 00:08:59.735 11260.276 - 11319.855: 8.4354% ( 104) 00:08:59.735 11319.855 - 11379.433: 9.9345% ( 142) 00:08:59.735 11379.433 - 11439.011: 11.9932% ( 195) 00:08:59.735 11439.011 - 11498.589: 13.9147% ( 182) 00:08:59.735 11498.589 - 11558.167: 15.7939% ( 178) 00:08:59.735 11558.167 - 11617.745: 17.7998% ( 190) 00:08:59.735 11617.745 - 11677.324: 19.9430% ( 203) 00:08:59.735 11677.324 - 11736.902: 22.0861% ( 203) 00:08:59.735 11736.902 - 11796.480: 24.3982% ( 219) 00:08:59.735 11796.480 - 11856.058: 26.6892% ( 217) 00:08:59.735 11856.058 - 11915.636: 29.1068% ( 229) 00:08:59.735 11915.636 - 11975.215: 31.4189% ( 219) 00:08:59.735 11975.215 - 12034.793: 33.7310% ( 219) 00:08:59.735 12034.793 - 12094.371: 36.0008% ( 215) 00:08:59.735 12094.371 - 12153.949: 38.2812% ( 216) 00:08:59.735 12153.949 - 12213.527: 40.4666% ( 207) 00:08:59.735 12213.527 - 12273.105: 42.5570% ( 198) 00:08:59.735 12273.105 - 12332.684: 44.7952% ( 212) 00:08:59.735 12332.684 - 12392.262: 47.0650% ( 215) 00:08:59.735 12392.262 - 12451.840: 49.1026% ( 193) 00:08:59.735 12451.840 - 12511.418: 50.8446% ( 165) 00:08:59.735 12511.418 - 12570.996: 52.2804% ( 136) 00:08:59.735 12570.996 - 12630.575: 53.6106% ( 126) 00:08:59.735 12630.575 - 12690.153: 54.8459% ( 117) 00:08:59.735 12690.153 - 12749.731: 56.0177% ( 111) 00:08:59.735 12749.731 - 12809.309: 57.0629% ( 99) 00:08:59.735 12809.309 - 12868.887: 57.9814% ( 87) 00:08:59.735 12868.887 - 12928.465: 58.9949% ( 96) 00:08:59.735 12928.465 - 12988.044: 60.0401% ( 99) 00:08:59.735 12988.044 - 13047.622: 61.0853% ( 99) 00:08:59.735 13047.622 - 13107.200: 62.0460% ( 91) 00:08:59.735 13107.200 - 13166.778: 63.1229% ( 102) 00:08:59.735 13166.778 - 13226.356: 64.3159% ( 113) 00:08:59.735 13226.356 - 13285.935: 65.5722% ( 119) 00:08:59.735 
13285.935 - 13345.513: 66.6702% ( 104) 00:08:59.735 13345.513 - 13405.091: 67.7787% ( 105) 00:08:59.735 13405.091 - 13464.669: 68.8978% ( 106) 00:08:59.735 13464.669 - 13524.247: 69.9430% ( 99) 00:08:59.735 13524.247 - 13583.825: 71.0515% ( 105) 00:08:59.735 13583.825 - 13643.404: 72.2445% ( 113) 00:08:59.735 13643.404 - 13702.982: 73.3214% ( 102) 00:08:59.735 13702.982 - 13762.560: 74.3032% ( 93) 00:08:59.735 13762.560 - 13822.138: 75.3273% ( 97) 00:08:59.735 13822.138 - 13881.716: 76.4253% ( 104) 00:08:59.735 13881.716 - 13941.295: 77.3015% ( 83) 00:08:59.735 13941.295 - 14000.873: 78.1144% ( 77) 00:08:59.735 14000.873 - 14060.451: 78.8851% ( 73) 00:08:59.735 14060.451 - 14120.029: 79.5714% ( 65) 00:08:59.735 14120.029 - 14179.607: 80.2470% ( 64) 00:08:59.735 14179.607 - 14239.185: 80.8277% ( 55) 00:08:59.735 14239.185 - 14298.764: 81.2922% ( 44) 00:08:59.735 14298.764 - 14358.342: 81.6512% ( 34) 00:08:59.735 14358.342 - 14417.920: 82.0101% ( 34) 00:08:59.735 14417.920 - 14477.498: 82.4430% ( 41) 00:08:59.735 14477.498 - 14537.076: 82.8125% ( 35) 00:08:59.735 14537.076 - 14596.655: 83.0764% ( 25) 00:08:59.735 14596.655 - 14656.233: 83.3193% ( 23) 00:08:59.735 14656.233 - 14715.811: 83.6465% ( 31) 00:08:59.735 14715.811 - 14775.389: 83.9105% ( 25) 00:08:59.735 14775.389 - 14834.967: 84.1216% ( 20) 00:08:59.735 14834.967 - 14894.545: 84.3539% ( 22) 00:08:59.735 14894.545 - 14954.124: 84.6073% ( 24) 00:08:59.735 14954.124 - 15013.702: 84.8184% ( 20) 00:08:59.735 15013.702 - 15073.280: 84.9768% ( 15) 00:08:59.735 15073.280 - 15132.858: 85.0929% ( 11) 00:08:59.735 15132.858 - 15192.436: 85.2407% ( 14) 00:08:59.735 15192.436 - 15252.015: 85.3991% ( 15) 00:08:59.735 15252.015 - 15371.171: 85.7580% ( 34) 00:08:59.735 15371.171 - 15490.327: 86.1170% ( 34) 00:08:59.735 15490.327 - 15609.484: 86.4654% ( 33) 00:08:59.735 15609.484 - 15728.640: 86.7715% ( 29) 00:08:59.735 15728.640 - 15847.796: 87.0355% ( 25) 00:08:59.735 15847.796 - 15966.953: 87.2255% ( 18) 00:08:59.735 15966.953 - 16086.109: 87.4472% ( 21) 00:08:59.735 16086.109 - 16205.265: 87.6478% ( 19) 00:08:59.735 16205.265 - 16324.422: 87.7956% ( 14) 00:08:59.735 16324.422 - 16443.578: 87.8906% ( 9) 00:08:59.735 16443.578 - 16562.735: 87.9645% ( 7) 00:08:59.735 16562.735 - 16681.891: 88.0173% ( 5) 00:08:59.735 16681.891 - 16801.047: 88.2073% ( 18) 00:08:59.735 16801.047 - 16920.204: 88.4396% ( 22) 00:08:59.735 16920.204 - 17039.360: 88.6085% ( 16) 00:08:59.735 17039.360 - 17158.516: 88.7669% ( 15) 00:08:59.735 17158.516 - 17277.673: 88.9253% ( 15) 00:08:59.735 17277.673 - 17396.829: 89.0731% ( 14) 00:08:59.735 17396.829 - 17515.985: 89.2209% ( 14) 00:08:59.735 17515.985 - 17635.142: 89.3898% ( 16) 00:08:59.735 17635.142 - 17754.298: 89.5587% ( 16) 00:08:59.735 17754.298 - 17873.455: 89.6643% ( 10) 00:08:59.735 17873.455 - 17992.611: 89.7804% ( 11) 00:08:59.735 17992.611 - 18111.767: 90.0866% ( 29) 00:08:59.735 18111.767 - 18230.924: 90.3399% ( 24) 00:08:59.735 18230.924 - 18350.080: 90.6250% ( 27) 00:08:59.735 18350.080 - 18469.236: 90.9417% ( 30) 00:08:59.735 18469.236 - 18588.393: 91.2373% ( 28) 00:08:59.735 18588.393 - 18707.549: 91.6807% ( 42) 00:08:59.735 18707.549 - 18826.705: 92.0714% ( 37) 00:08:59.735 18826.705 - 18945.862: 92.3986% ( 31) 00:08:59.735 18945.862 - 19065.018: 92.7576% ( 34) 00:08:59.735 19065.018 - 19184.175: 93.0849% ( 31) 00:08:59.735 19184.175 - 19303.331: 93.4016% ( 30) 00:08:59.735 19303.331 - 19422.487: 93.7500% ( 33) 00:08:59.735 19422.487 - 19541.644: 94.0562% ( 29) 00:08:59.735 19541.644 - 19660.800: 94.3412% 
( 27) 00:08:59.735 19660.800 - 19779.956: 94.6368% ( 28) 00:08:59.735 19779.956 - 19899.113: 94.8796% ( 23) 00:08:59.735 19899.113 - 20018.269: 95.1330% ( 24) 00:08:59.735 20018.269 - 20137.425: 95.3653% ( 22) 00:08:59.735 20137.425 - 20256.582: 95.5870% ( 21) 00:08:59.735 20256.582 - 20375.738: 95.8509% ( 25) 00:08:59.735 20375.738 - 20494.895: 96.0938% ( 23) 00:08:59.735 20494.895 - 20614.051: 96.3682% ( 26) 00:08:59.735 20614.051 - 20733.207: 96.6322% ( 25) 00:08:59.735 20733.207 - 20852.364: 96.8644% ( 22) 00:08:59.735 20852.364 - 20971.520: 96.9911% ( 12) 00:08:59.735 20971.520 - 21090.676: 97.0861% ( 9) 00:08:59.735 21090.676 - 21209.833: 97.1706% ( 8) 00:08:59.735 21209.833 - 21328.989: 97.2234% ( 5) 00:08:59.735 21328.989 - 21448.145: 97.2867% ( 6) 00:08:59.735 21448.145 - 21567.302: 97.3395% ( 5) 00:08:59.735 21567.302 - 21686.458: 97.3818% ( 4) 00:08:59.735 21686.458 - 21805.615: 97.4345% ( 5) 00:08:59.735 21805.615 - 21924.771: 97.4768% ( 4) 00:08:59.735 21924.771 - 22043.927: 97.5296% ( 5) 00:08:59.735 22043.927 - 22163.084: 97.5823% ( 5) 00:08:59.735 22163.084 - 22282.240: 97.6351% ( 5) 00:08:59.735 22282.240 - 22401.396: 97.6879% ( 5) 00:08:59.735 22401.396 - 22520.553: 97.7407% ( 5) 00:08:59.735 22520.553 - 22639.709: 97.7829% ( 4) 00:08:59.735 22639.709 - 22758.865: 97.8463% ( 6) 00:08:59.735 22758.865 - 22878.022: 97.8885% ( 4) 00:08:59.735 22878.022 - 22997.178: 97.9519% ( 6) 00:08:59.735 22997.178 - 23116.335: 97.9730% ( 2) 00:08:59.735 23116.335 - 23235.491: 98.0574% ( 8) 00:08:59.735 23235.491 - 23354.647: 98.1102% ( 5) 00:08:59.735 23354.647 - 23473.804: 98.1841% ( 7) 00:08:59.735 23473.804 - 23592.960: 98.2475% ( 6) 00:08:59.735 23592.960 - 23712.116: 98.3108% ( 6) 00:08:59.735 23712.116 - 23831.273: 98.3847% ( 7) 00:08:59.735 23831.273 - 23950.429: 98.4481% ( 6) 00:08:59.735 23950.429 - 24069.585: 98.5642% ( 11) 00:08:59.735 24069.585 - 24188.742: 98.6909% ( 12) 00:08:59.735 24188.742 - 24307.898: 98.8387% ( 14) 00:08:59.735 24307.898 - 24427.055: 98.9126% ( 7) 00:08:59.735 24427.055 - 24546.211: 98.9865% ( 7) 00:08:59.735 24546.211 - 24665.367: 99.0076% ( 2) 00:08:59.735 24665.367 - 24784.524: 99.0393% ( 3) 00:08:59.735 24784.524 - 24903.680: 99.0709% ( 3) 00:08:59.735 24903.680 - 25022.836: 99.0921% ( 2) 00:08:59.735 25022.836 - 25141.993: 99.1237% ( 3) 00:08:59.735 25141.993 - 25261.149: 99.1554% ( 3) 00:08:59.735 25261.149 - 25380.305: 99.1871% ( 3) 00:08:59.735 25380.305 - 25499.462: 99.2188% ( 3) 00:08:59.735 25499.462 - 25618.618: 99.2399% ( 2) 00:08:59.735 25618.618 - 25737.775: 99.2715% ( 3) 00:08:59.735 25737.775 - 25856.931: 99.3032% ( 3) 00:08:59.735 25856.931 - 25976.087: 99.3243% ( 2) 00:08:59.735 29908.247 - 30027.404: 99.4088% ( 8) 00:08:59.735 31218.967 - 31457.280: 99.4299% ( 2) 00:08:59.735 31457.280 - 31695.593: 99.5038% ( 7) 00:08:59.735 31695.593 - 31933.905: 99.5777% ( 7) 00:08:59.735 31933.905 - 32172.218: 99.6516% ( 7) 00:08:59.735 32172.218 - 32410.531: 99.7361% ( 8) 00:08:59.735 32410.531 - 32648.844: 99.8205% ( 8) 00:08:59.735 32648.844 - 32887.156: 99.9050% ( 8) 00:08:59.735 32887.156 - 33125.469: 99.9894% ( 8) 00:08:59.735 33125.469 - 33363.782: 100.0000% ( 1) 00:08:59.735 00:08:59.994 07:49:01 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']' 00:08:59.994 00:08:59.994 real 0m2.862s 00:08:59.994 user 0m2.320s 00:08:59.994 sys 0m0.393s 00:08:59.994 07:49:01 nvme.nvme_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:59.994 07:49:01 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x 00:08:59.994 
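The latency histograms above are the per-namespace output of the nvme_perf test, which the harness drives through nvme/nvme.sh; the format matches SPDK's bundled perf example. Below is a minimal sketch of a comparable standalone run, assuming a built SPDK tree and a device already bound for userspace access; the flag set is illustrative and should be checked against the binary's --help output.

  # Sketch only: short random-read workload against one controller, with
  # latency statistics. Paths, target address and flags are assumptions.
  sudo ./build/examples/perf \
      -q 128 -o 4096 -w randread -t 10 \
      -r 'trtype:PCIe traddr:0000:00:12.0' \
      -L    # latency tracking (assumed flag; confirm via --help)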
07:49:01 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
07:49:01 nvme -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']'
07:49:01 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable
07:49:01 nvme -- common/autotest_common.sh@10 -- # set +x
************************************
START TEST nvme_hello_world
************************************
07:49:01 nvme.nvme_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
00:09:00.252 Initializing NVMe Controllers
00:09:00.252 Attached to 0000:00:10.0
00:09:00.252 Namespace ID: 1 size: 6GB
00:09:00.252 Attached to 0000:00:11.0
00:09:00.252 Namespace ID: 1 size: 5GB
00:09:00.252 Attached to 0000:00:13.0
00:09:00.252 Namespace ID: 1 size: 1GB
00:09:00.252 Attached to 0000:00:12.0
00:09:00.252 Namespace ID: 1 size: 4GB
00:09:00.252 Namespace ID: 2 size: 4GB
00:09:00.252 Namespace ID: 3 size: 4GB
00:09:00.252 Initialization complete.
00:09:00.252 INFO: using host memory buffer for IO
00:09:00.252 Hello world!
00:09:00.252 INFO: using host memory buffer for IO
00:09:00.252 Hello world!
00:09:00.252 INFO: using host memory buffer for IO
00:09:00.252 Hello world!
00:09:00.252 INFO: using host memory buffer for IO
00:09:00.252 Hello world!
00:09:00.252 INFO: using host memory buffer for IO
00:09:00.252 Hello world!
00:09:00.252 INFO: using host memory buffer for IO
00:09:00.252 Hello world!
00:09:00.253 
00:09:00.253 real 0m0.333s
00:09:00.253 user 0m0.110s
00:09:00.253 sys 0m0.175s
07:49:02 nvme.nvme_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable
07:49:02 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x
************************************
END TEST nvme_hello_world
************************************
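The hello_world example claims each controller, writes a test string to a namespace, reads it back, and prints one greeting per namespace, which accounts for the six 'Hello world!' lines above (one namespace each on 10.0, 11.0 and 13.0, three on 12.0). A minimal sketch of running it by hand, assuming an SPDK checkout at $SPDK_DIR (hypothetical variable) with the examples built:

  cd "$SPDK_DIR"
  sudo scripts/setup.sh                  # rebind NVMe devices for userspace access
  sudo build/examples/hello_world -i 0   # same arguments as the harness invocation above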
07:49:02 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl
07:49:02 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
07:49:02 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable
07:49:02 nvme -- common/autotest_common.sh@10 -- # set +x
************************************
START TEST nvme_sgl
************************************
07:49:02 nvme.nvme_sgl -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl
00:09:00.511 0000:00:10.0: build_io_request_0 Invalid IO length parameter
00:09:00.511 0000:00:10.0: build_io_request_1 Invalid IO length parameter
00:09:00.511 0000:00:10.0: build_io_request_3 Invalid IO length parameter
00:09:00.511 0000:00:10.0: build_io_request_8 Invalid IO length parameter
00:09:00.511 0000:00:10.0: build_io_request_9 Invalid IO length parameter
00:09:00.511 0000:00:10.0: build_io_request_11 Invalid IO length parameter
00:09:00.511 0000:00:11.0: build_io_request_0 Invalid IO length parameter
00:09:00.511 0000:00:11.0: build_io_request_1 Invalid IO length parameter
00:09:00.511 0000:00:11.0: build_io_request_3 Invalid IO length parameter
00:09:00.511 0000:00:11.0: build_io_request_8 Invalid IO length parameter
00:09:00.511 0000:00:11.0: build_io_request_9 Invalid IO length parameter
00:09:00.511 0000:00:11.0: build_io_request_11 Invalid IO length parameter
00:09:00.511 0000:00:13.0: build_io_request_0 Invalid IO length parameter
00:09:00.511 0000:00:13.0: build_io_request_1 Invalid IO length parameter
00:09:00.511 0000:00:13.0: build_io_request_2 Invalid IO length parameter
00:09:00.511 0000:00:13.0: build_io_request_3 Invalid IO length parameter
00:09:00.511 0000:00:13.0: build_io_request_4 Invalid IO length parameter
00:09:00.511 0000:00:13.0: build_io_request_5 Invalid IO length parameter
00:09:00.511 0000:00:13.0: build_io_request_6 Invalid IO length parameter
00:09:00.511 0000:00:13.0: build_io_request_7 Invalid IO length parameter
00:09:00.511 0000:00:13.0: build_io_request_8 Invalid IO length parameter
00:09:00.511 0000:00:13.0: build_io_request_9 Invalid IO length parameter
00:09:00.511 0000:00:13.0: build_io_request_10 Invalid IO length parameter
00:09:00.511 0000:00:13.0: build_io_request_11 Invalid IO length parameter
00:09:00.511 0000:00:12.0: build_io_request_0 Invalid IO length parameter
00:09:00.511 0000:00:12.0: build_io_request_1 Invalid IO length parameter
00:09:00.511 0000:00:12.0: build_io_request_2 Invalid IO length parameter
00:09:00.511 0000:00:12.0: build_io_request_3 Invalid IO length parameter
00:09:00.511 0000:00:12.0: build_io_request_4 Invalid IO length parameter
00:09:00.511 0000:00:12.0: build_io_request_5 Invalid IO length parameter
00:09:00.511 0000:00:12.0: build_io_request_6 Invalid IO length parameter
00:09:00.511 0000:00:12.0: build_io_request_7 Invalid IO length parameter
00:09:00.511 0000:00:12.0: build_io_request_8 Invalid IO length parameter
00:09:00.511 0000:00:12.0: build_io_request_9 Invalid IO length parameter
00:09:00.511 0000:00:12.0: build_io_request_10 Invalid IO length parameter
00:09:00.511 0000:00:12.0: build_io_request_11 Invalid IO length parameter
00:09:00.770 NVMe Readv/Writev Request test
00:09:00.770 Attached to 0000:00:10.0
00:09:00.770 Attached to 0000:00:11.0
00:09:00.770 Attached to 0000:00:13.0
00:09:00.770 Attached to 0000:00:12.0
00:09:00.770 0000:00:10.0: build_io_request_2 test passed
00:09:00.770 0000:00:10.0: build_io_request_4 test passed
00:09:00.770 0000:00:10.0: build_io_request_5 test passed
00:09:00.770 0000:00:10.0: build_io_request_6 test passed
00:09:00.770 0000:00:10.0: build_io_request_7 test passed
00:09:00.770 0000:00:10.0: build_io_request_10 test passed
00:09:00.770 0000:00:11.0: build_io_request_2 test passed
00:09:00.770 0000:00:11.0: build_io_request_4 test passed
00:09:00.770 0000:00:11.0: build_io_request_5 test passed
00:09:00.770 0000:00:11.0: build_io_request_6 test passed
00:09:00.770 0000:00:11.0: build_io_request_7 test passed
00:09:00.770 0000:00:11.0: build_io_request_10 test passed
00:09:00.770 Cleaning up...
00:09:00.770 
00:09:00.770 real 0m0.378s
00:09:00.770 user 0m0.192s
00:09:00.770 sys 0m0.144s
07:49:02 nvme.nvme_sgl -- common/autotest_common.sh@1126 -- # xtrace_disable
07:49:02 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x
************************************
END TEST nvme_sgl
************************************
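The 'Invalid IO length parameter' lines above are expected negative cases, not failures: the test builds a series of scatter-gather requests, some deliberately malformed, and the suite passes as long as the well-formed variants (build_io_request_2, 4, 5, 6, 7 and 10 here) complete. A quick way to tally both classes from a captured run, assuming the output was saved to sgl.log (hypothetical file name):

  grep -c 'Invalid IO length parameter' sgl.log   # rejected (negative) cases
  grep -c 'test passed' sgl.log                   # completed (positive) cases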
07:49:02 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp
07:49:02 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
07:49:02 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable
07:49:02 nvme -- common/autotest_common.sh@10 -- # set +x
************************************
START TEST nvme_e2edp
************************************
07:49:02 nvme.nvme_e2edp -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp
00:09:01.029 NVMe Write/Read with End-to-End data protection test
00:09:01.029 Attached to 0000:00:10.0
00:09:01.029 Attached to 0000:00:11.0
00:09:01.029 Attached to 0000:00:13.0
00:09:01.029 Attached to 0000:00:12.0
00:09:01.029 Cleaning up...
00:09:01.029 
00:09:01.029 real 0m0.310s
00:09:01.029 user 0m0.115s
00:09:01.029 sys 0m0.148s
07:49:02 nvme.nvme_e2edp -- common/autotest_common.sh@1126 -- # xtrace_disable
07:49:02 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x
************************************
END TEST nvme_e2edp
************************************
07:49:02 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve
07:49:02 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
07:49:02 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable
07:49:02 nvme -- common/autotest_common.sh@10 -- # set +x
************************************
START TEST nvme_reserve
************************************
07:49:02 nvme.nvme_reserve -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve
00:09:01.287 =====================================================
00:09:01.287 NVMe Controller at PCI bus 0, device 16, function 0
00:09:01.287 =====================================================
00:09:01.287 Reservations:                Not Supported
00:09:01.287 =====================================================
00:09:01.287 NVMe Controller at PCI bus 0, device 17, function 0
00:09:01.287 =====================================================
00:09:01.287 Reservations:                Not Supported
00:09:01.287 =====================================================
00:09:01.287 NVMe Controller at PCI bus 0, device 19, function 0
00:09:01.287 =====================================================
00:09:01.287 Reservations:                Not Supported
00:09:01.287 =====================================================
00:09:01.287 NVMe Controller at PCI bus 0, device 18, function 0
00:09:01.287 =====================================================
00:09:01.287 Reservations:                Not Supported
00:09:01.287 Reservation test passed
00:09:01.287 
00:09:01.287 real 0m0.324s
00:09:01.287 user 0m0.120s
00:09:01.287 sys 0m0.156s
07:49:03 nvme.nvme_reserve -- common/autotest_common.sh@1126 -- # xtrace_disable
07:49:03 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x
************************************
END TEST nvme_reserve
************************************
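All four emulated controllers report 'Reservations: Not Supported', so the reservation test passes without exercising reservation commands. Reservation support is advertised in bit 5 of the ONCS field of the identify-controller data; one hedged way to inspect it from the host, assuming the device is bound back to the kernel nvme driver and nvme-cli is installed:

  sudo nvme id-ctrl /dev/nvme0 | grep -i oncs   # bit 5 set means reservations are supported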
00:09:03.176 submit (in ns) avg, min, max = 16499.0, 13207.3, 233121.4 00:09:03.176 complete (in ns) avg, min, max = 11225.7, 9830.5, 102641.4 00:09:03.176 00:09:03.176 Submit histogram 00:09:03.176 ================ 00:09:03.176 Range in us Cumulative Count 00:09:03.176 13.207 - 13.265: 0.0162% ( 2) 00:09:03.176 13.382 - 13.440: 0.0243% ( 1) 00:09:03.176 14.138 - 14.196: 0.0324% ( 1) 00:09:03.176 14.720 - 14.778: 0.0405% ( 1) 00:09:03.176 14.895 - 15.011: 0.0648% ( 3) 00:09:03.176 15.011 - 15.127: 0.7694% ( 87) 00:09:03.176 15.127 - 15.244: 4.3816% ( 446) 00:09:03.176 15.244 - 15.360: 13.7442% ( 1156) 00:09:03.176 15.360 - 15.476: 29.6266% ( 1961) 00:09:03.176 15.476 - 15.593: 48.2465% ( 2299) 00:09:03.176 15.593 - 15.709: 61.6587% ( 1656) 00:09:03.176 15.709 - 15.825: 69.5230% ( 971) 00:09:03.176 15.825 - 15.942: 74.3662% ( 598) 00:09:03.176 15.942 - 16.058: 77.1442% ( 343) 00:09:03.176 16.058 - 16.175: 79.2581% ( 261) 00:09:03.176 16.175 - 16.291: 80.9832% ( 213) 00:09:03.176 16.291 - 16.407: 82.1495% ( 144) 00:09:03.176 16.407 - 16.524: 83.0971% ( 117) 00:09:03.176 16.524 - 16.640: 83.7774% ( 84) 00:09:03.176 16.640 - 16.756: 84.5064% ( 90) 00:09:03.176 16.756 - 16.873: 85.0490% ( 67) 00:09:03.176 16.873 - 16.989: 85.4702% ( 52) 00:09:03.176 16.989 - 17.105: 85.8427% ( 46) 00:09:03.176 17.105 - 17.222: 86.2396% ( 49) 00:09:03.176 17.222 - 17.338: 86.8227% ( 72) 00:09:03.176 17.338 - 17.455: 87.8027% ( 121) 00:09:03.176 17.455 - 17.571: 88.8718% ( 132) 00:09:03.176 17.571 - 17.687: 89.9409% ( 132) 00:09:03.176 17.687 - 17.804: 90.5726% ( 78) 00:09:03.176 17.804 - 17.920: 90.9452% ( 46) 00:09:03.176 17.920 - 18.036: 91.2043% ( 32) 00:09:03.176 18.036 - 18.153: 91.4149% ( 26) 00:09:03.176 18.153 - 18.269: 91.5121% ( 12) 00:09:03.176 18.269 - 18.385: 91.6012% ( 11) 00:09:03.176 18.385 - 18.502: 91.6741% ( 9) 00:09:03.176 18.502 - 18.618: 91.7308% ( 7) 00:09:03.176 18.618 - 18.735: 91.8037% ( 9) 00:09:03.176 18.735 - 18.851: 91.8361% ( 4) 00:09:03.176 18.851 - 18.967: 91.8847% ( 6) 00:09:03.176 18.967 - 19.084: 91.9819% ( 12) 00:09:03.176 19.084 - 19.200: 92.0143% ( 4) 00:09:03.176 19.200 - 19.316: 92.0305% ( 2) 00:09:03.176 19.316 - 19.433: 92.1195% ( 11) 00:09:03.176 19.433 - 19.549: 92.1600% ( 5) 00:09:03.176 19.549 - 19.665: 92.2086% ( 6) 00:09:03.176 19.665 - 19.782: 92.2653% ( 7) 00:09:03.177 19.782 - 19.898: 92.3301% ( 8) 00:09:03.177 19.898 - 20.015: 92.3868% ( 7) 00:09:03.177 20.015 - 20.131: 92.4516% ( 8) 00:09:03.177 20.131 - 20.247: 92.5083% ( 7) 00:09:03.177 20.247 - 20.364: 92.5650% ( 7) 00:09:03.177 20.364 - 20.480: 92.6541% ( 11) 00:09:03.177 20.480 - 20.596: 92.7675% ( 14) 00:09:03.177 20.596 - 20.713: 92.8809% ( 14) 00:09:03.177 20.713 - 20.829: 92.9862% ( 13) 00:09:03.177 20.829 - 20.945: 93.1157% ( 16) 00:09:03.177 20.945 - 21.062: 93.2048% ( 11) 00:09:03.177 21.062 - 21.178: 93.3101% ( 13) 00:09:03.177 21.178 - 21.295: 93.4397% ( 16) 00:09:03.177 21.295 - 21.411: 93.5126% ( 9) 00:09:03.177 21.411 - 21.527: 93.6098% ( 12) 00:09:03.177 21.527 - 21.644: 93.7556% ( 18) 00:09:03.177 21.644 - 21.760: 93.8690% ( 14) 00:09:03.177 21.760 - 21.876: 93.9985% ( 16) 00:09:03.177 21.876 - 21.993: 94.1443% ( 18) 00:09:03.177 21.993 - 22.109: 94.2496% ( 13) 00:09:03.177 22.109 - 22.225: 94.4359% ( 23) 00:09:03.177 22.225 - 22.342: 94.5574% ( 15) 00:09:03.177 22.342 - 22.458: 94.7194% ( 20) 00:09:03.177 22.458 - 22.575: 94.8004% ( 10) 00:09:03.177 22.575 - 22.691: 94.9542% ( 19) 00:09:03.177 22.691 - 22.807: 95.1162% ( 20) 00:09:03.177 22.807 - 22.924: 95.2539% ( 17) 00:09:03.177 22.924 
- 23.040: 95.4159% ( 20) 00:09:03.177 23.040 - 23.156: 95.5374% ( 15) 00:09:03.177 23.156 - 23.273: 95.6670% ( 16) 00:09:03.177 23.273 - 23.389: 95.8127% ( 18) 00:09:03.177 23.389 - 23.505: 95.9099% ( 12) 00:09:03.177 23.505 - 23.622: 96.0152% ( 13) 00:09:03.177 23.622 - 23.738: 96.1448% ( 16) 00:09:03.177 23.738 - 23.855: 96.2501% ( 13) 00:09:03.177 23.855 - 23.971: 96.3554% ( 13) 00:09:03.177 23.971 - 24.087: 96.4769% ( 15) 00:09:03.177 24.087 - 24.204: 96.5660% ( 11) 00:09:03.177 24.204 - 24.320: 96.6470% ( 10) 00:09:03.177 24.320 - 24.436: 96.7441% ( 12) 00:09:03.177 24.436 - 24.553: 96.8656% ( 15) 00:09:03.177 24.553 - 24.669: 97.0114% ( 18) 00:09:03.177 24.669 - 24.785: 97.0762% ( 8) 00:09:03.177 24.785 - 24.902: 97.1572% ( 10) 00:09:03.177 24.902 - 25.018: 97.3030% ( 18) 00:09:03.177 25.018 - 25.135: 97.4164% ( 14) 00:09:03.177 25.135 - 25.251: 97.5379% ( 15) 00:09:03.177 25.251 - 25.367: 97.6108% ( 9) 00:09:03.177 25.367 - 25.484: 97.7322% ( 15) 00:09:03.177 25.484 - 25.600: 97.7970% ( 8) 00:09:03.177 25.600 - 25.716: 97.8780% ( 10) 00:09:03.177 25.716 - 25.833: 97.9590% ( 10) 00:09:03.177 25.833 - 25.949: 98.0157% ( 7) 00:09:03.177 25.949 - 26.065: 98.0886% ( 9) 00:09:03.177 26.065 - 26.182: 98.1777% ( 11) 00:09:03.177 26.182 - 26.298: 98.2911% ( 14) 00:09:03.177 26.298 - 26.415: 98.3883% ( 12) 00:09:03.177 26.415 - 26.531: 98.4531% ( 8) 00:09:03.177 26.531 - 26.647: 98.5179% ( 8) 00:09:03.177 26.647 - 26.764: 98.5908% ( 9) 00:09:03.177 26.764 - 26.880: 98.6474% ( 7) 00:09:03.177 26.880 - 26.996: 98.7284% ( 10) 00:09:03.177 26.996 - 27.113: 98.7770% ( 6) 00:09:03.177 27.113 - 27.229: 98.8661% ( 11) 00:09:03.177 27.229 - 27.345: 98.9147% ( 6) 00:09:03.177 27.345 - 27.462: 98.9228% ( 1) 00:09:03.177 27.462 - 27.578: 98.9957% ( 9) 00:09:03.177 27.578 - 27.695: 99.0362% ( 5) 00:09:03.177 27.695 - 27.811: 99.0605% ( 3) 00:09:03.177 27.811 - 27.927: 99.0929% ( 4) 00:09:03.177 27.927 - 28.044: 99.1172% ( 3) 00:09:03.177 28.044 - 28.160: 99.1577% ( 5) 00:09:03.177 28.160 - 28.276: 99.1658% ( 1) 00:09:03.177 28.276 - 28.393: 99.2063% ( 5) 00:09:03.177 28.393 - 28.509: 99.2387% ( 4) 00:09:03.177 28.509 - 28.625: 99.2468% ( 1) 00:09:03.177 28.625 - 28.742: 99.2792% ( 4) 00:09:03.177 28.742 - 28.858: 99.2873% ( 1) 00:09:03.177 28.858 - 28.975: 99.2954% ( 1) 00:09:03.177 28.975 - 29.091: 99.3197% ( 3) 00:09:03.177 29.207 - 29.324: 99.3440% ( 3) 00:09:03.177 29.324 - 29.440: 99.3602% ( 2) 00:09:03.177 29.440 - 29.556: 99.3845% ( 3) 00:09:03.177 29.556 - 29.673: 99.3926% ( 1) 00:09:03.177 29.673 - 29.789: 99.4250% ( 4) 00:09:03.177 29.789 - 30.022: 99.4817% ( 7) 00:09:03.177 30.022 - 30.255: 99.5141% ( 4) 00:09:03.177 30.255 - 30.487: 99.5303% ( 2) 00:09:03.177 30.487 - 30.720: 99.5545% ( 3) 00:09:03.177 30.720 - 30.953: 99.5950% ( 5) 00:09:03.177 30.953 - 31.185: 99.6031% ( 1) 00:09:03.177 31.185 - 31.418: 99.6112% ( 1) 00:09:03.177 31.418 - 31.651: 99.6193% ( 1) 00:09:03.177 31.651 - 31.884: 99.6274% ( 1) 00:09:03.177 31.884 - 32.116: 99.6436% ( 2) 00:09:03.177 32.582 - 32.815: 99.6517% ( 1) 00:09:03.177 32.815 - 33.047: 99.6598% ( 1) 00:09:03.177 33.047 - 33.280: 99.6679% ( 1) 00:09:03.177 33.513 - 33.745: 99.7003% ( 4) 00:09:03.177 33.978 - 34.211: 99.7165% ( 2) 00:09:03.177 34.676 - 34.909: 99.7246% ( 1) 00:09:03.177 34.909 - 35.142: 99.7327% ( 1) 00:09:03.177 35.142 - 35.375: 99.7408% ( 1) 00:09:03.177 35.375 - 35.607: 99.7489% ( 1) 00:09:03.177 35.607 - 35.840: 99.7651% ( 2) 00:09:03.177 35.840 - 36.073: 99.7732% ( 1) 00:09:03.177 36.538 - 36.771: 99.7813% ( 1) 00:09:03.177 37.004 - 
37.236: 99.7894% ( 1) 00:09:03.177 37.469 - 37.702: 99.8218% ( 4) 00:09:03.177 37.702 - 37.935: 99.8299% ( 1) 00:09:03.177 37.935 - 38.167: 99.8380% ( 1) 00:09:03.177 38.633 - 38.865: 99.8461% ( 1) 00:09:03.177 39.331 - 39.564: 99.8623% ( 2) 00:09:03.177 42.124 - 42.356: 99.8704% ( 1) 00:09:03.177 42.356 - 42.589: 99.8785% ( 1) 00:09:03.177 42.589 - 42.822: 99.8947% ( 2) 00:09:03.177 44.218 - 44.451: 99.9028% ( 1) 00:09:03.177 45.847 - 46.080: 99.9109% ( 1) 00:09:03.177 47.476 - 47.709: 99.9190% ( 1) 00:09:03.177 52.829 - 53.062: 99.9271% ( 1) 00:09:03.177 54.924 - 55.156: 99.9352% ( 1) 00:09:03.177 57.716 - 57.949: 99.9433% ( 1) 00:09:03.177 58.182 - 58.415: 99.9514% ( 1) 00:09:03.177 60.509 - 60.975: 99.9595% ( 1) 00:09:03.177 74.007 - 74.473: 99.9676% ( 1) 00:09:03.177 81.455 - 81.920: 99.9757% ( 1) 00:09:03.177 105.193 - 105.658: 99.9838% ( 1) 00:09:03.177 110.313 - 110.778: 99.9919% ( 1) 00:09:03.177 232.727 - 233.658: 100.0000% ( 1) 00:09:03.177 00:09:03.177 Complete histogram 00:09:03.177 ================== 00:09:03.177 Range in us Cumulative Count 00:09:03.177 9.775 - 9.833: 0.0081% ( 1) 00:09:03.177 9.833 - 9.891: 0.0324% ( 3) 00:09:03.177 9.891 - 9.949: 0.2754% ( 30) 00:09:03.177 9.949 - 10.007: 2.1139% ( 227) 00:09:03.177 10.007 - 10.065: 6.9815% ( 601) 00:09:03.177 10.065 - 10.124: 17.8829% ( 1346) 00:09:03.177 10.124 - 10.182: 32.2669% ( 1776) 00:09:03.177 10.182 - 10.240: 46.1165% ( 1710) 00:09:03.177 10.240 - 10.298: 57.9574% ( 1462) 00:09:03.177 10.298 - 10.356: 65.8622% ( 976) 00:09:03.177 10.356 - 10.415: 71.2724% ( 668) 00:09:03.177 10.415 - 10.473: 74.5282% ( 402) 00:09:03.177 10.473 - 10.531: 76.6259% ( 259) 00:09:03.177 10.531 - 10.589: 77.7679% ( 141) 00:09:03.177 10.589 - 10.647: 78.5940% ( 102) 00:09:03.177 10.647 - 10.705: 78.9423% ( 43) 00:09:03.177 10.705 - 10.764: 79.2824% ( 42) 00:09:03.177 10.764 - 10.822: 79.5011% ( 27) 00:09:03.177 10.822 - 10.880: 79.7441% ( 30) 00:09:03.177 10.880 - 10.938: 80.0032% ( 32) 00:09:03.177 10.938 - 10.996: 80.3434% ( 42) 00:09:03.177 10.996 - 11.055: 80.7403% ( 49) 00:09:03.177 11.055 - 11.113: 81.3396% ( 74) 00:09:03.177 11.113 - 11.171: 81.9389% ( 74) 00:09:03.177 11.171 - 11.229: 82.5869% ( 80) 00:09:03.177 11.229 - 11.287: 83.3320% ( 92) 00:09:03.177 11.287 - 11.345: 83.9394% ( 75) 00:09:03.177 11.345 - 11.404: 84.3930% ( 56) 00:09:03.177 11.404 - 11.462: 84.8060% ( 51) 00:09:03.177 11.462 - 11.520: 85.0409% ( 29) 00:09:03.177 11.520 - 11.578: 85.3325% ( 36) 00:09:03.177 11.578 - 11.636: 85.4945% ( 20) 00:09:03.177 11.636 - 11.695: 85.6969% ( 25) 00:09:03.177 11.695 - 11.753: 85.9075% ( 26) 00:09:03.177 11.753 - 11.811: 86.0452% ( 17) 00:09:03.177 11.811 - 11.869: 86.1991% ( 19) 00:09:03.177 11.869 - 11.927: 86.3287% ( 16) 00:09:03.177 11.927 - 11.985: 86.4016% ( 9) 00:09:03.177 11.985 - 12.044: 86.4501% ( 6) 00:09:03.177 12.044 - 12.102: 86.5878% ( 17) 00:09:03.177 12.102 - 12.160: 86.7336% ( 18) 00:09:03.177 12.160 - 12.218: 86.7822% ( 6) 00:09:03.177 12.218 - 12.276: 86.8875% ( 13) 00:09:03.177 12.276 - 12.335: 87.0495% ( 20) 00:09:03.177 12.335 - 12.393: 87.1629% ( 14) 00:09:03.177 12.393 - 12.451: 87.3006% ( 17) 00:09:03.177 12.451 - 12.509: 87.4301% ( 16) 00:09:03.177 12.509 - 12.567: 87.5921% ( 20) 00:09:03.177 12.567 - 12.625: 87.7055% ( 14) 00:09:03.177 12.625 - 12.684: 87.7946% ( 11) 00:09:03.177 12.684 - 12.742: 87.9242% ( 16) 00:09:03.177 12.742 - 12.800: 88.0781% ( 19) 00:09:03.177 12.800 - 12.858: 88.2158% ( 17) 00:09:03.177 12.858 - 12.916: 88.3210% ( 13) 00:09:03.177 12.916 - 12.975: 88.4182% ( 12) 
00:09:03.177 [nvme_overhead cumulative latency histogram elided here: per-bucket percentages and counts from bucket 12.975 - 13.033 up to bucket 102.400 - 102.865, where the distribution reaches 100.0000%]
00:09:03.178
00:09:03.178
00:09:03.178 real 0m1.289s
00:09:03.178 user 0m1.095s
00:09:03.178 sys 0m0.140s
00:09:03.178 07:49:04 nvme.nvme_overhead -- common/autotest_common.sh@1126 -- # xtrace_disable
00:09:03.178 07:49:04 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x
00:09:03.178 ************************************
00:09:03.178 END TEST nvme_overhead
00:09:03.178 ************************************
00:09:03.178 07:49:05 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0
00:09:03.178 07:49:05 nvme -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']'
00:09:03.178 07:49:05 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable
00:09:03.178 07:49:05 nvme -- common/autotest_common.sh@10 -- # set +x
00:09:03.178 ************************************
00:09:03.178 START TEST nvme_arbitration
00:09:03.178 ************************************
00:09:03.178 07:49:05 nvme.nvme_arbitration -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0
00:09:06.462 Initializing NVMe Controllers
00:09:06.462 Attached to 0000:00:10.0
00:09:06.462 Attached to 0000:00:11.0
00:09:06.462 Attached to 0000:00:13.0
00:09:06.462 Attached to 0000:00:12.0
00:09:06.462 Associating QEMU NVMe Ctrl (12340 ) with lcore 0
00:09:06.462 Associating QEMU NVMe Ctrl (12341 ) with lcore 1
00:09:06.462 Associating QEMU NVMe Ctrl (12343 ) with lcore 2
00:09:06.462 Associating QEMU NVMe Ctrl (12342 ) with lcore 3
00:09:06.462 Associating QEMU NVMe Ctrl (12342 ) with lcore 0
00:09:06.462 Associating QEMU NVMe Ctrl (12342 ) with lcore 1
00:09:06.462 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration:
00:09:06.462 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0
00:09:06.462 Initialization complete. Launching workers.
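The arbitration binary exercised here is the stock SPDK example; the "run with configuration" line it just printed echoes its effective flags. A minimal way to reproduce this run outside the harness is sketched below. Only -t and -i are taken from the run_test line above; the meanings noted in the comments are assumptions based on common SPDK example conventions, and the remaining flags fall back to the defaults shown in the configuration line.

  # re-run the arbitration sample standalone (requires devices bound to SPDK)
  cd /home/vagrant/spdk_repo/spdk
  # -t 3: run time in seconds (assumed); -i 0: shared-memory id (assumed)
  sudo ./build/examples/arbitration -t 3 -i 0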
00:09:06.462 Starting thread on core 1 with urgent priority queue
00:09:06.462 Starting thread on core 2 with urgent priority queue
00:09:06.463 Starting thread on core 3 with urgent priority queue
00:09:06.463 Starting thread on core 0 with urgent priority queue
00:09:06.463 QEMU NVMe Ctrl (12340 ) core 0: 704.00 IO/s 142.05 secs/100000 ios
00:09:06.463 QEMU NVMe Ctrl (12342 ) core 0: 704.00 IO/s 142.05 secs/100000 ios
00:09:06.463 QEMU NVMe Ctrl (12341 ) core 1: 746.67 IO/s 133.93 secs/100000 ios
00:09:06.463 QEMU NVMe Ctrl (12342 ) core 1: 746.67 IO/s 133.93 secs/100000 ios
00:09:06.463 QEMU NVMe Ctrl (12343 ) core 2: 597.33 IO/s 167.41 secs/100000 ios
00:09:06.463 QEMU NVMe Ctrl (12342 ) core 3: 512.00 IO/s 195.31 secs/100000 ios
00:09:06.463 ========================================================
00:09:06.463
00:09:06.463
00:09:06.463 real 0m3.434s
00:09:06.463 user 0m9.317s
00:09:06.463 sys 0m0.190s
00:09:06.463 07:49:08 nvme.nvme_arbitration -- common/autotest_common.sh@1126 -- # xtrace_disable
00:09:06.463 07:49:08 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x
00:09:06.463 ************************************
00:09:06.463 END TEST nvme_arbitration
00:09:06.463 ************************************
00:09:06.720 07:49:08 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0
00:09:06.720 07:49:08 nvme -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']'
00:09:06.720 07:49:08 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable
00:09:06.720 07:49:08 nvme -- common/autotest_common.sh@10 -- # set +x
00:09:06.720 ************************************
00:09:06.720 START TEST nvme_single_aen
00:09:06.720 ************************************
00:09:06.720 07:49:08 nvme.nvme_single_aen -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0
00:09:06.979 Asynchronous Event Request test
00:09:06.979 Attached to 0000:00:10.0
00:09:06.979 Attached to 0000:00:11.0
00:09:06.979 Attached to 0000:00:13.0
00:09:06.979 Attached to 0000:00:12.0
00:09:06.979 Reset controller to setup AER completions for this process
00:09:06.979 Registering asynchronous event callbacks...
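The aer test output that follows shows the standard trick for forcing an Asynchronous Event Notification without real overheating: lower each controller's temperature threshold below its reported composite temperature (323 Kelvin here, against an original threshold of 343 Kelvin), wait for the aer_cb to fire on log page 2, then restore the threshold. A rough out-of-band equivalent using nvme-cli is sketched below; it is hypothetical, assuming the device is bound to the kernel driver as /dev/nvme0 and nvme-cli is installed, which is not the case inside this SPDK run.

  # Feature 0x04 is Temperature Threshold; 320 K sits below the 323 K these
  # controllers report, so a temperature AEN should fire
  sudo nvme set-feature /dev/nvme0 --feature-id=4 --value=320
  sudo nvme smart-log /dev/nvme0   # log page 0x02, the page the aer_cb reads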
00:09:06.979 Getting orig temperature thresholds of all controllers 00:09:06.979 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:06.979 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:06.979 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:06.979 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:06.979 Setting all controllers temperature threshold low to trigger AER 00:09:06.979 Waiting for all controllers temperature threshold to be set lower 00:09:06.979 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:06.979 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:09:06.979 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:06.979 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:09:06.979 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:06.979 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:09:06.979 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:06.979 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:09:06.979 Waiting for all controllers to trigger AER and reset threshold 00:09:06.979 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:06.979 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:06.979 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:06.979 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:06.979 Cleaning up... 00:09:06.979 00:09:06.979 real 0m0.320s 00:09:06.979 user 0m0.117s 00:09:06.979 sys 0m0.148s 00:09:06.979 07:49:08 nvme.nvme_single_aen -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:06.979 07:49:08 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x 00:09:06.979 ************************************ 00:09:06.979 END TEST nvme_single_aen 00:09:06.979 ************************************ 00:09:06.979 07:49:08 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:09:06.979 07:49:08 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:06.979 07:49:08 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:06.979 07:49:08 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:06.979 ************************************ 00:09:06.979 START TEST nvme_doorbell_aers 00:09:06.979 ************************************ 00:09:06.979 07:49:08 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1125 -- # nvme_doorbell_aers 00:09:06.979 07:49:08 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=() 00:09:06.979 07:49:08 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf 00:09:06.979 07:49:08 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:09:06.979 07:49:08 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:09:06.979 07:49:08 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1496 -- # bdfs=() 00:09:06.979 07:49:08 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1496 -- # local bdfs 00:09:06.979 07:49:08 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:06.979 07:49:08 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:09:06.979 07:49:08 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 
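The xtrace above shows how the harness enumerates controllers: gen_nvme.sh emits a JSON config and jq extracts each PCI address (traddr). A standalone sketch of the same idiom, including the per-device doorbell_aers loop the test runs next (paths and flags copied from this log's trace):

  rootdir=/home/vagrant/spdk_repo/spdk
  bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
  for bdf in "${bdfs[@]}"; do   # 0000:00:10.0 through 0000:00:13.0 on this VM
    timeout --preserve-status 10 \
      "$rootdir/test/nvme/doorbell_aers/doorbell_aers" -r "trtype:PCIe traddr:$bdf"
  done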
00:09:06.979 07:49:08 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:09:06.979 07:49:08 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:06.979 07:49:08 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:09:06.979 07:49:08 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:09:07.545 [2024-10-09 07:49:09.262731] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65134) is not found. Dropping the request. 00:09:17.569 Executing: test_write_invalid_db 00:09:17.569 Waiting for AER completion... 00:09:17.569 Failure: test_write_invalid_db 00:09:17.569 00:09:17.569 Executing: test_invalid_db_write_overflow_sq 00:09:17.569 Waiting for AER completion... 00:09:17.569 Failure: test_invalid_db_write_overflow_sq 00:09:17.569 00:09:17.569 Executing: test_invalid_db_write_overflow_cq 00:09:17.569 Waiting for AER completion... 00:09:17.569 Failure: test_invalid_db_write_overflow_cq 00:09:17.569 00:09:17.569 07:49:19 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:09:17.569 07:49:19 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:11.0' 00:09:17.569 [2024-10-09 07:49:19.339257] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65134) is not found. Dropping the request. 00:09:27.543 Executing: test_write_invalid_db 00:09:27.543 Waiting for AER completion... 00:09:27.543 Failure: test_write_invalid_db 00:09:27.543 00:09:27.543 Executing: test_invalid_db_write_overflow_sq 00:09:27.543 Waiting for AER completion... 00:09:27.543 Failure: test_invalid_db_write_overflow_sq 00:09:27.543 00:09:27.543 Executing: test_invalid_db_write_overflow_cq 00:09:27.543 Waiting for AER completion... 00:09:27.543 Failure: test_invalid_db_write_overflow_cq 00:09:27.543 00:09:27.543 07:49:29 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:09:27.543 07:49:29 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:12.0' 00:09:27.543 [2024-10-09 07:49:29.353275] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65134) is not found. Dropping the request. 00:09:37.513 Executing: test_write_invalid_db 00:09:37.513 Waiting for AER completion... 00:09:37.513 Failure: test_write_invalid_db 00:09:37.513 00:09:37.513 Executing: test_invalid_db_write_overflow_sq 00:09:37.513 Waiting for AER completion... 00:09:37.513 Failure: test_invalid_db_write_overflow_sq 00:09:37.513 00:09:37.513 Executing: test_invalid_db_write_overflow_cq 00:09:37.513 Waiting for AER completion... 
00:09:37.513 Failure: test_invalid_db_write_overflow_cq 00:09:37.513 00:09:37.513 07:49:39 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:09:37.513 07:49:39 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:13.0' 00:09:37.513 [2024-10-09 07:49:39.419026] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65134) is not found. Dropping the request. 00:09:47.500 Executing: test_write_invalid_db 00:09:47.500 Waiting for AER completion... 00:09:47.500 Failure: test_write_invalid_db 00:09:47.500 00:09:47.500 Executing: test_invalid_db_write_overflow_sq 00:09:47.500 Waiting for AER completion... 00:09:47.500 Failure: test_invalid_db_write_overflow_sq 00:09:47.500 00:09:47.500 Executing: test_invalid_db_write_overflow_cq 00:09:47.500 Waiting for AER completion... 00:09:47.500 Failure: test_invalid_db_write_overflow_cq 00:09:47.500 00:09:47.500 ************************************ 00:09:47.500 END TEST nvme_doorbell_aers 00:09:47.500 ************************************ 00:09:47.500 00:09:47.500 real 0m40.264s 00:09:47.500 user 0m34.145s 00:09:47.500 sys 0m5.706s 00:09:47.500 07:49:49 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:47.500 07:49:49 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x 00:09:47.500 07:49:49 nvme -- nvme/nvme.sh@97 -- # uname 00:09:47.500 07:49:49 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:09:47.500 07:49:49 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:09:47.500 07:49:49 nvme -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:09:47.500 07:49:49 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:47.500 07:49:49 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:47.500 ************************************ 00:09:47.500 START TEST nvme_multi_aen 00:09:47.500 ************************************ 00:09:47.500 07:49:49 nvme.nvme_multi_aen -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:09:47.500 [2024-10-09 07:49:49.443526] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65134) is not found. Dropping the request. 00:09:47.500 [2024-10-09 07:49:49.443628] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65134) is not found. Dropping the request. 00:09:47.500 [2024-10-09 07:49:49.443652] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65134) is not found. Dropping the request. 00:09:47.500 [2024-10-09 07:49:49.445437] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65134) is not found. Dropping the request. 00:09:47.500 [2024-10-09 07:49:49.445488] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65134) is not found. Dropping the request. 00:09:47.500 [2024-10-09 07:49:49.445508] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65134) is not found. Dropping the request. 00:09:47.500 [2024-10-09 07:49:49.446833] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65134) is not found. 
Dropping the request. 00:09:47.500 [2024-10-09 07:49:49.446876] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65134) is not found. Dropping the request. 00:09:47.500 [2024-10-09 07:49:49.446894] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65134) is not found. Dropping the request. 00:09:47.500 [2024-10-09 07:49:49.448559] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65134) is not found. Dropping the request. 00:09:47.500 [2024-10-09 07:49:49.448777] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65134) is not found. Dropping the request. 00:09:47.500 [2024-10-09 07:49:49.449193] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65134) is not found. Dropping the request. 00:09:47.500 Child process pid: 65650 00:09:47.759 [Child] Asynchronous Event Request test 00:09:47.759 [Child] Attached to 0000:00:10.0 00:09:47.759 [Child] Attached to 0000:00:11.0 00:09:47.759 [Child] Attached to 0000:00:13.0 00:09:47.759 [Child] Attached to 0000:00:12.0 00:09:47.759 [Child] Registering asynchronous event callbacks... 00:09:47.759 [Child] Getting orig temperature thresholds of all controllers 00:09:47.759 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:47.759 [Child] 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:47.759 [Child] 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:47.759 [Child] 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:47.759 [Child] Waiting for all controllers to trigger AER and reset threshold 00:09:47.759 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:47.759 [Child] 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:47.759 [Child] 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:47.759 [Child] 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:47.759 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:47.759 [Child] 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:47.759 [Child] 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:47.759 [Child] 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:47.759 [Child] Cleaning up... 00:09:48.017 Asynchronous Event Request test 00:09:48.017 Attached to 0000:00:10.0 00:09:48.017 Attached to 0000:00:11.0 00:09:48.017 Attached to 0000:00:13.0 00:09:48.017 Attached to 0000:00:12.0 00:09:48.017 Reset controller to setup AER completions for this process 00:09:48.017 Registering asynchronous event callbacks... 
00:09:48.017 Getting orig temperature thresholds of all controllers 00:09:48.017 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:48.017 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:48.017 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:48.017 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:48.017 Setting all controllers temperature threshold low to trigger AER 00:09:48.017 Waiting for all controllers temperature threshold to be set lower 00:09:48.017 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:48.017 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:09:48.017 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:48.017 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:09:48.017 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:48.017 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:09:48.017 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:48.017 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:09:48.017 Waiting for all controllers to trigger AER and reset threshold 00:09:48.017 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:48.017 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:48.018 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:48.018 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:48.018 Cleaning up... 00:09:48.018 00:09:48.018 real 0m0.629s 00:09:48.018 user 0m0.204s 00:09:48.018 sys 0m0.307s 00:09:48.018 07:49:49 nvme.nvme_multi_aen -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:48.018 07:49:49 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x 00:09:48.018 ************************************ 00:09:48.018 END TEST nvme_multi_aen 00:09:48.018 ************************************ 00:09:48.018 07:49:49 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:09:48.018 07:49:49 nvme -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:09:48.018 07:49:49 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:48.018 07:49:49 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:48.018 ************************************ 00:09:48.018 START TEST nvme_startup 00:09:48.018 ************************************ 00:09:48.018 07:49:49 nvme.nvme_startup -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:09:48.276 Initializing NVMe Controllers 00:09:48.276 Attached to 0000:00:10.0 00:09:48.276 Attached to 0000:00:11.0 00:09:48.276 Attached to 0000:00:13.0 00:09:48.276 Attached to 0000:00:12.0 00:09:48.276 Initialization complete. 00:09:48.276 Time used:205834.859 (us). 
00:09:48.276 ************************************ 00:09:48.276 END TEST nvme_startup 00:09:48.276 ************************************ 00:09:48.276 00:09:48.276 real 0m0.292s 00:09:48.276 user 0m0.100s 00:09:48.276 sys 0m0.147s 00:09:48.276 07:49:50 nvme.nvme_startup -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:48.276 07:49:50 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x 00:09:48.276 07:49:50 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:09:48.276 07:49:50 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:48.276 07:49:50 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:48.276 07:49:50 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:48.276 ************************************ 00:09:48.276 START TEST nvme_multi_secondary 00:09:48.276 ************************************ 00:09:48.276 07:49:50 nvme.nvme_multi_secondary -- common/autotest_common.sh@1125 -- # nvme_multi_secondary 00:09:48.276 07:49:50 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=65706 00:09:48.276 07:49:50 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:09:48.276 07:49:50 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=65707 00:09:48.276 07:49:50 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:09:48.276 07:49:50 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:09:51.559 Initializing NVMe Controllers 00:09:51.559 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:51.559 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:51.559 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:51.559 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:51.559 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:09:51.559 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:09:51.559 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:09:51.559 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:09:51.559 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:09:51.560 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:09:51.560 Initialization complete. Launching workers. 
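The nvme_multi_secondary trace above launches one spdk_nvme_perf on core mask 0x1 for 5 seconds and two shorter instances on 0x4 and 0x2 for 3 seconds each; all three share -i 0, which lets the later processes attach to controllers the first one already initialized (SPDK multi-process mode; the primary/secondary framing is an assumption about -i based on how these perf runs overlap in the results below). The pattern, reduced to a sketch:

  perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf
  "$perf" -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 & pid0=$!   # first process, longest-lived
  "$perf" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 & pid1=$!   # joins via shared -i 0
  "$perf" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2             # foreground instance
  wait "$pid0" "$pid1"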
00:09:51.560 ======================================================== 00:09:51.560 Latency(us) 00:09:51.560 Device Information : IOPS MiB/s Average min max 00:09:51.560 PCIE (0000:00:10.0) NSID 1 from core 2: 2168.37 8.47 7375.53 1085.61 24477.66 00:09:51.560 PCIE (0000:00:11.0) NSID 1 from core 2: 2168.37 8.47 7378.82 1099.39 28635.02 00:09:51.560 PCIE (0000:00:13.0) NSID 1 from core 2: 2168.37 8.47 7378.68 902.87 29310.53 00:09:51.560 PCIE (0000:00:12.0) NSID 1 from core 2: 2168.37 8.47 7379.07 1150.59 25344.22 00:09:51.560 PCIE (0000:00:12.0) NSID 2 from core 2: 2168.37 8.47 7379.68 1107.73 25005.59 00:09:51.560 PCIE (0000:00:12.0) NSID 3 from core 2: 2168.37 8.47 7386.20 1103.18 24869.00 00:09:51.560 ======================================================== 00:09:51.560 Total : 13010.20 50.82 7379.66 902.87 29310.53 00:09:51.560 00:09:51.819 07:49:53 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 65706 00:09:51.819 Initializing NVMe Controllers 00:09:51.819 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:51.819 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:51.819 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:51.819 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:51.819 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:09:51.819 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:09:51.819 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:09:51.819 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:09:51.819 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:09:51.819 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:09:51.819 Initialization complete. Launching workers. 00:09:51.819 ======================================================== 00:09:51.819 Latency(us) 00:09:51.819 Device Information : IOPS MiB/s Average min max 00:09:51.819 PCIE (0000:00:10.0) NSID 1 from core 1: 4691.51 18.33 3408.13 1286.35 11789.83 00:09:51.819 PCIE (0000:00:11.0) NSID 1 from core 1: 4691.51 18.33 3409.58 1362.19 11618.16 00:09:51.819 PCIE (0000:00:13.0) NSID 1 from core 1: 4691.51 18.33 3409.38 1322.05 11686.27 00:09:51.819 PCIE (0000:00:12.0) NSID 1 from core 1: 4691.51 18.33 3409.14 1277.93 11472.76 00:09:51.819 PCIE (0000:00:12.0) NSID 2 from core 1: 4691.51 18.33 3408.91 1307.92 11481.01 00:09:51.819 PCIE (0000:00:12.0) NSID 3 from core 1: 4691.51 18.33 3408.69 1294.29 11536.15 00:09:51.819 ======================================================== 00:09:51.819 Total : 28149.08 109.96 3408.97 1277.93 11789.83 00:09:51.819 00:09:53.719 Initializing NVMe Controllers 00:09:53.719 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:53.719 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:53.719 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:53.719 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:53.719 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:09:53.719 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:09:53.719 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:09:53.719 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:09:53.719 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:09:53.719 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:09:53.719 Initialization complete. Launching workers. 
00:09:53.719 ======================================================== 00:09:53.719 Latency(us) 00:09:53.719 Device Information : IOPS MiB/s Average min max 00:09:53.719 PCIE (0000:00:10.0) NSID 1 from core 0: 7564.30 29.55 2113.63 944.78 13380.08 00:09:53.719 PCIE (0000:00:11.0) NSID 1 from core 0: 7564.30 29.55 2114.68 955.45 13661.44 00:09:53.719 PCIE (0000:00:13.0) NSID 1 from core 0: 7564.30 29.55 2114.63 919.33 12117.54 00:09:53.719 PCIE (0000:00:12.0) NSID 1 from core 0: 7564.30 29.55 2114.58 837.76 12378.04 00:09:53.719 PCIE (0000:00:12.0) NSID 2 from core 0: 7564.30 29.55 2114.52 777.25 12475.99 00:09:53.719 PCIE (0000:00:12.0) NSID 3 from core 0: 7564.30 29.55 2114.48 733.56 12857.46 00:09:53.719 ======================================================== 00:09:53.719 Total : 45385.79 177.29 2114.42 733.56 13661.44 00:09:53.719 00:09:53.719 07:49:55 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 65707 00:09:53.719 07:49:55 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=65782 00:09:53.719 07:49:55 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:09:53.719 07:49:55 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=65783 00:09:53.719 07:49:55 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:09:53.719 07:49:55 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:09:57.004 Initializing NVMe Controllers 00:09:57.004 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:57.004 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:57.004 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:57.004 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:57.004 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:09:57.004 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:09:57.004 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:09:57.004 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:09:57.004 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:09:57.004 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:09:57.004 Initialization complete. Launching workers. 
00:09:57.004 ======================================================== 00:09:57.004 Latency(us) 00:09:57.004 Device Information : IOPS MiB/s Average min max 00:09:57.004 PCIE (0000:00:10.0) NSID 1 from core 1: 4833.61 18.88 3308.23 1114.86 7391.33 00:09:57.004 PCIE (0000:00:11.0) NSID 1 from core 1: 4833.61 18.88 3310.18 1110.62 7657.24 00:09:57.004 PCIE (0000:00:13.0) NSID 1 from core 1: 4833.61 18.88 3310.21 1111.87 7622.81 00:09:57.004 PCIE (0000:00:12.0) NSID 1 from core 1: 4833.61 18.88 3310.25 1118.61 8213.52 00:09:57.004 PCIE (0000:00:12.0) NSID 2 from core 1: 4833.61 18.88 3310.25 1107.47 8253.96 00:09:57.004 PCIE (0000:00:12.0) NSID 3 from core 1: 4833.61 18.88 3310.33 1135.47 8554.37 00:09:57.004 ======================================================== 00:09:57.004 Total : 29001.69 113.29 3309.91 1107.47 8554.37 00:09:57.004 00:09:57.262 Initializing NVMe Controllers 00:09:57.262 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:57.262 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:57.262 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:57.262 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:57.262 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:09:57.263 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:09:57.263 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:09:57.263 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:09:57.263 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:09:57.263 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:09:57.263 Initialization complete. Launching workers. 00:09:57.263 ======================================================== 00:09:57.263 Latency(us) 00:09:57.263 Device Information : IOPS MiB/s Average min max 00:09:57.263 PCIE (0000:00:10.0) NSID 1 from core 0: 4833.96 18.88 3307.76 1405.59 9502.93 00:09:57.263 PCIE (0000:00:11.0) NSID 1 from core 0: 4833.96 18.88 3309.00 1412.82 9682.68 00:09:57.263 PCIE (0000:00:13.0) NSID 1 from core 0: 4833.96 18.88 3308.62 1455.18 9804.15 00:09:57.263 PCIE (0000:00:12.0) NSID 1 from core 0: 4833.96 18.88 3308.39 1476.55 9810.54 00:09:57.263 PCIE (0000:00:12.0) NSID 2 from core 0: 4833.96 18.88 3308.18 1392.63 9662.34 00:09:57.263 PCIE (0000:00:12.0) NSID 3 from core 0: 4833.96 18.88 3307.88 1475.94 9865.42 00:09:57.263 ======================================================== 00:09:57.263 Total : 29003.76 113.30 3308.31 1392.63 9865.42 00:09:57.263 00:09:59.821 Initializing NVMe Controllers 00:09:59.821 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:59.821 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:59.821 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:59.821 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:59.821 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:09:59.821 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:09:59.821 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:09:59.821 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:09:59.821 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:09:59.821 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:09:59.821 Initialization complete. Launching workers. 
00:09:59.821 ======================================================== 00:09:59.821 Latency(us) 00:09:59.821 Device Information : IOPS MiB/s Average min max 00:09:59.821 PCIE (0000:00:10.0) NSID 1 from core 2: 3369.17 13.16 4746.00 1035.04 18440.65 00:09:59.821 PCIE (0000:00:11.0) NSID 1 from core 2: 3369.17 13.16 4748.15 1043.52 19203.06 00:09:59.821 PCIE (0000:00:13.0) NSID 1 from core 2: 3369.17 13.16 4748.13 1020.77 17081.28 00:09:59.821 PCIE (0000:00:12.0) NSID 1 from core 2: 3369.17 13.16 4747.51 1032.11 18675.91 00:09:59.821 PCIE (0000:00:12.0) NSID 2 from core 2: 3369.17 13.16 4747.90 944.93 18555.67 00:09:59.821 PCIE (0000:00:12.0) NSID 3 from core 2: 3369.17 13.16 4747.12 851.12 18299.14 00:09:59.821 ======================================================== 00:09:59.821 Total : 20215.03 78.96 4747.47 851.12 19203.06 00:09:59.821 00:09:59.821 ************************************ 00:09:59.821 END TEST nvme_multi_secondary 00:09:59.821 ************************************ 00:09:59.821 07:50:01 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 65782 00:09:59.821 07:50:01 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 65783 00:09:59.821 00:09:59.821 real 0m11.118s 00:09:59.821 user 0m18.561s 00:09:59.821 sys 0m0.985s 00:09:59.821 07:50:01 nvme.nvme_multi_secondary -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:59.821 07:50:01 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x 00:09:59.821 07:50:01 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:09:59.821 07:50:01 nvme -- nvme/nvme.sh@102 -- # kill_stub 00:09:59.821 07:50:01 nvme -- common/autotest_common.sh@1089 -- # [[ -e /proc/64715 ]] 00:09:59.821 07:50:01 nvme -- common/autotest_common.sh@1090 -- # kill 64715 00:09:59.821 07:50:01 nvme -- common/autotest_common.sh@1091 -- # wait 64715 00:09:59.821 [2024-10-09 07:50:01.375214] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65649) is not found. Dropping the request. 00:09:59.821 [2024-10-09 07:50:01.375645] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65649) is not found. Dropping the request. 00:09:59.821 [2024-10-09 07:50:01.375707] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65649) is not found. Dropping the request. 00:09:59.821 [2024-10-09 07:50:01.375747] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65649) is not found. Dropping the request. 00:09:59.821 [2024-10-09 07:50:01.378655] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65649) is not found. Dropping the request. 00:09:59.821 [2024-10-09 07:50:01.378742] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65649) is not found. Dropping the request. 00:09:59.821 [2024-10-09 07:50:01.378779] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65649) is not found. Dropping the request. 00:09:59.821 [2024-10-09 07:50:01.378814] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65649) is not found. Dropping the request. 00:09:59.821 [2024-10-09 07:50:01.381941] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65649) is not found. Dropping the request. 
00:09:59.821 [2024-10-09 07:50:01.382025] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65649) is not found. Dropping the request. 00:09:59.821 [2024-10-09 07:50:01.382059] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65649) is not found. Dropping the request. 00:09:59.821 [2024-10-09 07:50:01.382090] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65649) is not found. Dropping the request. 00:09:59.821 [2024-10-09 07:50:01.385215] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65649) is not found. Dropping the request. 00:09:59.821 [2024-10-09 07:50:01.385311] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65649) is not found. Dropping the request. 00:09:59.821 [2024-10-09 07:50:01.385371] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65649) is not found. Dropping the request. 00:09:59.821 [2024-10-09 07:50:01.385410] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65649) is not found. Dropping the request. 00:09:59.821 07:50:01 nvme -- common/autotest_common.sh@1093 -- # rm -f /var/run/spdk_stub0 00:09:59.821 07:50:01 nvme -- common/autotest_common.sh@1097 -- # echo 2 00:09:59.821 07:50:01 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:09:59.821 07:50:01 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:59.821 07:50:01 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:59.821 07:50:01 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:59.821 ************************************ 00:09:59.821 START TEST bdev_nvme_reset_stuck_adm_cmd 00:09:59.821 ************************************ 00:09:59.821 07:50:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:09:59.821 * Looking for test storage... 
00:09:59.821 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:59.821 07:50:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:59.821 07:50:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1681 -- # lcov --version 00:09:59.821 07:50:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:00.119 07:50:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:00.119 07:50:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:00.119 07:50:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:00.119 07:50:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:00.119 07:50:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # IFS=.-: 00:10:00.119 07:50:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # read -ra ver1 00:10:00.119 07:50:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # IFS=.-: 00:10:00.119 07:50:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # read -ra ver2 00:10:00.119 07:50:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@338 -- # local 'op=<' 00:10:00.119 07:50:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@340 -- # ver1_l=2 00:10:00.119 07:50:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@341 -- # ver2_l=1 00:10:00.119 07:50:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:00.119 07:50:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@344 -- # case "$op" in 00:10:00.119 07:50:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@345 -- # : 1 00:10:00.119 07:50:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:00.119 07:50:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:00.119 07:50:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # decimal 1 00:10:00.119 07:50:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=1 00:10:00.119 07:50:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:00.119 07:50:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 1 00:10:00.119 07:50:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # ver1[v]=1 00:10:00.119 07:50:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # decimal 2 00:10:00.119 07:50:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=2 00:10:00.119 07:50:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:00.119 07:50:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 2 00:10:00.119 07:50:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # ver2[v]=2 00:10:00.119 07:50:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:00.119 07:50:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:00.119 07:50:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # return 0 00:10:00.119 07:50:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:00.119 07:50:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:00.119 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:00.119 --rc genhtml_branch_coverage=1 00:10:00.119 --rc genhtml_function_coverage=1 00:10:00.119 --rc genhtml_legend=1 00:10:00.119 --rc geninfo_all_blocks=1 00:10:00.119 --rc geninfo_unexecuted_blocks=1 00:10:00.119 00:10:00.119 ' 00:10:00.119 07:50:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:00.119 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:00.119 --rc genhtml_branch_coverage=1 00:10:00.119 --rc genhtml_function_coverage=1 00:10:00.119 --rc genhtml_legend=1 00:10:00.119 --rc geninfo_all_blocks=1 00:10:00.119 --rc geninfo_unexecuted_blocks=1 00:10:00.119 00:10:00.119 ' 00:10:00.119 07:50:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:10:00.119 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:00.119 --rc genhtml_branch_coverage=1 00:10:00.119 --rc genhtml_function_coverage=1 00:10:00.119 --rc genhtml_legend=1 00:10:00.119 --rc geninfo_all_blocks=1 00:10:00.119 --rc geninfo_unexecuted_blocks=1 00:10:00.119 00:10:00.119 ' 00:10:00.119 07:50:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:00.119 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:00.119 --rc genhtml_branch_coverage=1 00:10:00.119 --rc genhtml_function_coverage=1 00:10:00.119 --rc genhtml_legend=1 00:10:00.119 --rc geninfo_all_blocks=1 00:10:00.119 --rc geninfo_unexecuted_blocks=1 00:10:00.119 00:10:00.119 ' 00:10:00.119 07:50:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:10:00.119 07:50:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:10:00.119 07:50:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:10:00.119 
07:50:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:10:00.119 07:50:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:10:00.119 07:50:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:10:00.119 07:50:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1507 -- # bdfs=() 00:10:00.119 07:50:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1507 -- # local bdfs 00:10:00.119 07:50:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1508 -- # bdfs=($(get_nvme_bdfs)) 00:10:00.119 07:50:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1508 -- # get_nvme_bdfs 00:10:00.119 07:50:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1496 -- # bdfs=() 00:10:00.119 07:50:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1496 -- # local bdfs 00:10:00.119 07:50:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:10:00.119 07:50:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:10:00.119 07:50:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:10:00.119 07:50:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:10:00.119 07:50:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:10:00.119 07:50:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # echo 0000:00:10.0 00:10:00.119 07:50:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:10:00.119 07:50:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:10:00.119 07:50:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=65949 00:10:00.119 07:50:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:10:00.119 07:50:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:10:00.119 07:50:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 65949 00:10:00.119 07:50:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@831 -- # '[' -z 65949 ']' 00:10:00.119 07:50:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:00.119 07:50:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:00.119 07:50:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:00.119 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
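Here the harness boots a full SPDK target (spdk_tgt -m 0xF, four reactor cores) and blocks in waitforlisten until the target's RPC socket accepts commands. A minimal standalone stand-in for that wait (the socket path is SPDK's default /var/tmp/spdk.sock from the echo above; the polling loop is a simplification of what common/autotest_common.sh actually does):

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF &
  tgt_pid=$!
  # poll until the UNIX-domain RPC socket exists, then proceed with rpc.py calls
  until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done
  echo "spdk_tgt ($tgt_pid) is listening"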
00:10:00.119 07:50:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:00.119 07:50:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:10:00.378 [2024-10-09 07:50:02.153033] Starting SPDK v25.01-pre git sha1 1c2942c86 / DPDK 24.03.0 initialization... 00:10:00.378 [2024-10-09 07:50:02.153271] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65949 ] 00:10:00.378 [2024-10-09 07:50:02.355208] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:00.637 [2024-10-09 07:50:02.547820] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:10:00.637 [2024-10-09 07:50:02.547910] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:10:00.637 [2024-10-09 07:50:02.548000] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:10:00.637 [2024-10-09 07:50:02.548246] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:10:01.571 07:50:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:01.571 07:50:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@864 -- # return 0 00:10:01.571 07:50:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:10:01.571 07:50:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.571 07:50:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:10:01.571 nvme0n1 00:10:01.571 07:50:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.571 07:50:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:10:01.571 07:50:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_8Ampe.txt 00:10:01.571 07:50:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:10:01.571 07:50:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.571 07:50:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:10:01.571 true 00:10:01.571 07:50:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.571 07:50:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:10:01.571 07:50:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1728460203 00:10:01.571 07:50:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=65972 00:10:01.571 07:50:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:10:01.571 07:50:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:10:01.571 
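The RPC sequence above is the core of the stuck-admin-command test: attach the controller as bdev nvme0, then arm a one-shot error injection so the next admin command with opcode 10 (0x0a, GET FEATURES) is held for up to 15 seconds and completed with sct=0/sc=1, which the later completion trace prints as INVALID OPCODE (00/01). Replayed by hand against an already-running target (rpc.py path from this log, all flags copied from the trace; rpc_cmd resolving to scripts/rpc.py is an assumption about the common test helpers):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  "$rpc" bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0
  "$rpc" bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 \
      --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit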
07:50:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:10:03.572 07:50:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:10:03.572 07:50:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.572 07:50:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:10:03.572 [2024-10-09 07:50:05.422405] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:10:03.572 [2024-10-09 07:50:05.422786] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:10:03.572 [2024-10-09 07:50:05.422824] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:10:03.572 [2024-10-09 07:50:05.422844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:03.572 [2024-10-09 07:50:05.424840] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:10:03.572 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 65972 00:10:03.572 07:50:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.572 07:50:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 65972 00:10:03.572 07:50:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 65972 00:10:03.572 07:50:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:10:03.572 07:50:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:10:03.572 07:50:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:10:03.572 07:50:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.572 07:50:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:10:03.572 07:50:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.572 07:50:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:10:03.572 07:50:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_8Ampe.txt 00:10:03.572 07:50:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:10:03.573 07:50:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:10:03.573 07:50:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:10:03.573 07:50:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:10:03.573 07:50:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:10:03.573 07:50:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:10:03.573 07:50:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- 
nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:10:03.573 07:50:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:10:03.573 07:50:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:10:03.573 07:50:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:10:03.573 07:50:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:10:03.573 07:50:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:10:03.573 07:50:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:10:03.573 07:50:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:10:03.573 07:50:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:10:03.573 07:50:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:10:03.573 07:50:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:10:03.573 07:50:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:10:03.573 07:50:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:10:03.573 07:50:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_8Ampe.txt 00:10:03.573 07:50:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 65949 00:10:03.573 07:50:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@950 -- # '[' -z 65949 ']' 00:10:03.573 07:50:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # kill -0 65949 00:10:03.573 07:50:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@955 -- # uname 00:10:03.573 07:50:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:03.573 07:50:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 65949 00:10:03.573 killing process with pid 65949 00:10:03.573 07:50:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:03.573 07:50:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:03.573 07:50:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 65949' 00:10:03.573 07:50:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@969 -- # kill 65949 00:10:03.573 07:50:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@974 -- # wait 65949 00:10:06.117 07:50:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:10:06.117 07:50:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:10:06.117 ************************************ 00:10:06.117 END TEST bdev_nvme_reset_stuck_adm_cmd 00:10:06.117 ************************************ 00:10:06.117 00:10:06.117 real 0m6.140s 
00:10:06.117 user 0m20.987s 00:10:06.117 sys 0m0.669s 00:10:06.117 07:50:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:06.117 07:50:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:10:06.117 07:50:07 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:10:06.117 07:50:07 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:10:06.117 07:50:07 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:06.117 07:50:07 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:06.117 07:50:07 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:06.117 ************************************ 00:10:06.117 START TEST nvme_fio 00:10:06.117 ************************************ 00:10:06.117 07:50:07 nvme.nvme_fio -- common/autotest_common.sh@1125 -- # nvme_fio_test 00:10:06.117 07:50:07 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:10:06.117 07:50:07 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:10:06.117 07:50:07 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:10:06.117 07:50:07 nvme.nvme_fio -- common/autotest_common.sh@1496 -- # bdfs=() 00:10:06.117 07:50:07 nvme.nvme_fio -- common/autotest_common.sh@1496 -- # local bdfs 00:10:06.117 07:50:07 nvme.nvme_fio -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:10:06.117 07:50:07 nvme.nvme_fio -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:10:06.117 07:50:07 nvme.nvme_fio -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:10:06.117 07:50:07 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:10:06.117 07:50:07 nvme.nvme_fio -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:10:06.117 07:50:07 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0' '0000:00:11.0' '0000:00:12.0' '0000:00:13.0') 00:10:06.117 07:50:07 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:10:06.117 07:50:07 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:10:06.117 07:50:07 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:10:06.117 07:50:07 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:10:06.375 07:50:08 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:10:06.375 07:50:08 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:10:06.634 07:50:08 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:10:06.634 07:50:08 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:10:06.634 07:50:08 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:10:06.634 07:50:08 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:10:06.634 07:50:08 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:10:06.634 07:50:08 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:10:06.634 07:50:08 nvme.nvme_fio -- 
common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:06.634 07:50:08 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:10:06.634 07:50:08 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:10:06.634 07:50:08 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:10:06.634 07:50:08 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:06.634 07:50:08 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:10:06.634 07:50:08 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:10:06.634 07:50:08 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:10:06.635 07:50:08 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:10:06.635 07:50:08 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:10:06.635 07:50:08 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:10:06.635 07:50:08 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:10:06.893 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:10:06.893 fio-3.35 00:10:06.893 Starting 1 thread 00:10:10.179 00:10:10.179 test: (groupid=0, jobs=1): err= 0: pid=66125: Wed Oct 9 07:50:11 2024 00:10:10.179 read: IOPS=15.5k, BW=60.4MiB/s (63.3MB/s)(121MiB/2001msec) 00:10:10.179 slat (nsec): min=4549, max=54367, avg=6235.43, stdev=2207.18 00:10:10.179 clat (usec): min=353, max=10237, avg=4117.92, stdev=857.91 00:10:10.179 lat (usec): min=359, max=10289, avg=4124.16, stdev=858.91 00:10:10.179 clat percentiles (usec): 00:10:10.179 | 1.00th=[ 2409], 5.00th=[ 3097], 10.00th=[ 3425], 20.00th=[ 3621], 00:10:10.179 | 30.00th=[ 3720], 40.00th=[ 3785], 50.00th=[ 3884], 60.00th=[ 4113], 00:10:10.179 | 70.00th=[ 4359], 80.00th=[ 4490], 90.00th=[ 4752], 95.00th=[ 6259], 00:10:10.179 | 99.00th=[ 7177], 99.50th=[ 7504], 99.90th=[ 8586], 99.95th=[ 8717], 00:10:10.179 | 99.99th=[ 9896] 00:10:10.179 bw ( KiB/s): min=62688, max=67112, per=100.00%, avg=64514.67, stdev=2310.50, samples=3 00:10:10.179 iops : min=15672, max=16778, avg=16128.67, stdev=577.62, samples=3 00:10:10.179 write: IOPS=15.5k, BW=60.4MiB/s (63.4MB/s)(121MiB/2001msec); 0 zone resets 00:10:10.179 slat (nsec): min=4647, max=85275, avg=6403.15, stdev=2310.58 00:10:10.179 clat (usec): min=283, max=10002, avg=4130.27, stdev=853.55 00:10:10.179 lat (usec): min=287, max=10010, avg=4136.68, stdev=854.53 00:10:10.179 clat percentiles (usec): 00:10:10.179 | 1.00th=[ 2409], 5.00th=[ 3130], 10.00th=[ 3458], 20.00th=[ 3654], 00:10:10.179 | 30.00th=[ 3720], 40.00th=[ 3818], 50.00th=[ 3916], 60.00th=[ 4113], 00:10:10.179 | 70.00th=[ 4359], 80.00th=[ 4490], 90.00th=[ 4752], 95.00th=[ 6259], 00:10:10.179 | 99.00th=[ 7111], 99.50th=[ 7504], 99.90th=[ 8455], 99.95th=[ 8848], 00:10:10.179 | 99.99th=[ 9503] 00:10:10.179 bw ( KiB/s): min=61984, max=67296, per=100.00%, avg=64261.33, stdev=2735.78, samples=3 00:10:10.179 iops : min=15496, max=16824, avg=16065.33, stdev=683.95, samples=3 00:10:10.179 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:10:10.179 lat (msec) : 2=0.29%, 4=55.71%, 10=43.96%, 20=0.01% 00:10:10.179 cpu : usr=98.90%, sys=0.05%, ctx=2, majf=0, 
minf=608 00:10:10.179 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:10:10.179 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:10.179 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:10.179 issued rwts: total=30938,30960,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:10.179 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:10.179 00:10:10.179 Run status group 0 (all jobs): 00:10:10.179 READ: bw=60.4MiB/s (63.3MB/s), 60.4MiB/s-60.4MiB/s (63.3MB/s-63.3MB/s), io=121MiB (127MB), run=2001-2001msec 00:10:10.179 WRITE: bw=60.4MiB/s (63.4MB/s), 60.4MiB/s-60.4MiB/s (63.4MB/s-63.4MB/s), io=121MiB (127MB), run=2001-2001msec 00:10:10.179 ----------------------------------------------------- 00:10:10.179 Suppressions used: 00:10:10.179 count bytes template 00:10:10.179 1 32 /usr/src/fio/parse.c 00:10:10.179 1 8 libtcmalloc_minimal.so 00:10:10.179 ----------------------------------------------------- 00:10:10.179 00:10:10.179 07:50:12 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:10:10.179 07:50:12 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:10:10.179 07:50:12 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:10:10.179 07:50:12 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:10:10.438 07:50:12 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:10:10.438 07:50:12 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:10:10.696 07:50:12 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:10:10.696 07:50:12 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:10:10.696 07:50:12 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:10:10.696 07:50:12 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:10:10.696 07:50:12 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:10:10.696 07:50:12 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:10:10.696 07:50:12 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:10.696 07:50:12 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:10:10.696 07:50:12 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:10:10.696 07:50:12 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:10:10.696 07:50:12 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:10.696 07:50:12 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:10:10.696 07:50:12 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:10:10.696 07:50:12 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:10:10.696 07:50:12 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:10:10.696 07:50:12 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:10:10.696 07:50:12 nvme.nvme_fio -- 
common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:10:10.696 07:50:12 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:10:10.953 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:10:10.953 fio-3.35 00:10:10.953 Starting 1 thread 00:10:14.238 00:10:14.238 test: (groupid=0, jobs=1): err= 0: pid=66189: Wed Oct 9 07:50:15 2024 00:10:14.238 read: IOPS=14.6k, BW=57.0MiB/s (59.7MB/s)(114MiB/2001msec) 00:10:14.238 slat (nsec): min=4569, max=68060, avg=6697.65, stdev=2162.24 00:10:14.238 clat (usec): min=309, max=10750, avg=4360.19, stdev=714.67 00:10:14.238 lat (usec): min=316, max=10818, avg=4366.89, stdev=715.48 00:10:14.238 clat percentiles (usec): 00:10:14.238 | 1.00th=[ 2376], 5.00th=[ 3392], 10.00th=[ 3720], 20.00th=[ 3884], 00:10:14.238 | 30.00th=[ 4015], 40.00th=[ 4113], 50.00th=[ 4228], 60.00th=[ 4490], 00:10:14.238 | 70.00th=[ 4752], 80.00th=[ 4883], 90.00th=[ 5014], 95.00th=[ 5276], 00:10:14.238 | 99.00th=[ 6587], 99.50th=[ 6915], 99.90th=[ 8029], 99.95th=[ 8979], 00:10:14.238 | 99.99th=[10683] 00:10:14.238 bw ( KiB/s): min=55456, max=61408, per=99.53%, avg=58066.67, stdev=3042.53, samples=3 00:10:14.238 iops : min=13864, max=15352, avg=14516.67, stdev=760.63, samples=3 00:10:14.238 write: IOPS=14.6k, BW=57.1MiB/s (59.9MB/s)(114MiB/2001msec); 0 zone resets 00:10:14.238 slat (nsec): min=4670, max=52128, avg=6848.63, stdev=2135.06 00:10:14.238 clat (usec): min=280, max=10584, avg=4371.10, stdev=716.32 00:10:14.238 lat (usec): min=286, max=10599, avg=4377.95, stdev=717.10 00:10:14.238 clat percentiles (usec): 00:10:14.238 | 1.00th=[ 2442], 5.00th=[ 3425], 10.00th=[ 3720], 20.00th=[ 3884], 00:10:14.238 | 30.00th=[ 4015], 40.00th=[ 4113], 50.00th=[ 4228], 60.00th=[ 4555], 00:10:14.238 | 70.00th=[ 4752], 80.00th=[ 4883], 90.00th=[ 5014], 95.00th=[ 5342], 00:10:14.238 | 99.00th=[ 6652], 99.50th=[ 6915], 99.90th=[ 8094], 99.95th=[ 9241], 00:10:14.238 | 99.99th=[10421] 00:10:14.238 bw ( KiB/s): min=55240, max=61024, per=99.13%, avg=57968.00, stdev=2905.92, samples=3 00:10:14.238 iops : min=13810, max=15256, avg=14492.00, stdev=726.48, samples=3 00:10:14.238 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:10:14.238 lat (msec) : 2=0.43%, 4=28.50%, 10=71.00%, 20=0.02% 00:10:14.238 cpu : usr=98.80%, sys=0.10%, ctx=5, majf=0, minf=608 00:10:14.238 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:10:14.238 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:14.238 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:14.238 issued rwts: total=29186,29253,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:14.238 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:14.238 00:10:14.238 Run status group 0 (all jobs): 00:10:14.238 READ: bw=57.0MiB/s (59.7MB/s), 57.0MiB/s-57.0MiB/s (59.7MB/s-59.7MB/s), io=114MiB (120MB), run=2001-2001msec 00:10:14.238 WRITE: bw=57.1MiB/s (59.9MB/s), 57.1MiB/s-57.1MiB/s (59.9MB/s-59.9MB/s), io=114MiB (120MB), run=2001-2001msec 00:10:14.238 ----------------------------------------------------- 00:10:14.238 Suppressions used: 00:10:14.238 count bytes template 00:10:14.238 1 32 /usr/src/fio/parse.c 00:10:14.238 1 8 libtcmalloc_minimal.so 00:10:14.238 ----------------------------------------------------- 00:10:14.238 00:10:14.238 
07:50:15 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:10:14.238 07:50:15 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:10:14.238 07:50:15 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:10:14.238 07:50:15 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:10:14.497 07:50:16 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:10:14.497 07:50:16 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:10:14.755 07:50:16 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:10:14.755 07:50:16 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:10:14.755 07:50:16 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:10:14.755 07:50:16 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:10:14.755 07:50:16 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:10:14.755 07:50:16 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:10:14.755 07:50:16 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:14.755 07:50:16 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:10:14.755 07:50:16 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:10:14.755 07:50:16 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:10:14.755 07:50:16 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:10:14.755 07:50:16 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:14.755 07:50:16 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:10:14.755 07:50:16 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:10:14.755 07:50:16 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:10:14.755 07:50:16 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:10:14.755 07:50:16 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:10:14.755 07:50:16 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:10:14.755 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:10:14.755 fio-3.35 00:10:14.755 Starting 1 thread 00:10:18.038 00:10:18.038 test: (groupid=0, jobs=1): err= 0: pid=66246: Wed Oct 9 07:50:19 2024 00:10:18.038 read: IOPS=15.6k, BW=60.8MiB/s (63.7MB/s)(122MiB/2001msec) 00:10:18.038 slat (nsec): min=4569, max=69720, avg=6039.24, stdev=2056.06 00:10:18.038 clat (usec): min=286, max=9770, avg=4097.64, stdev=668.81 00:10:18.038 lat (usec): min=295, max=9775, avg=4103.68, stdev=669.45 00:10:18.038 clat percentiles (usec): 00:10:18.038 | 1.00th=[ 2671], 5.00th=[ 3261], 10.00th=[ 3556], 20.00th=[ 3752], 00:10:18.038 | 30.00th=[ 3818], 
40.00th=[ 3916], 50.00th=[ 3982], 60.00th=[ 4080], 00:10:18.038 | 70.00th=[ 4178], 80.00th=[ 4359], 90.00th=[ 4817], 95.00th=[ 5145], 00:10:18.038 | 99.00th=[ 6587], 99.50th=[ 7177], 99.90th=[ 9110], 99.95th=[ 9372], 00:10:18.038 | 99.99th=[ 9634] 00:10:18.038 bw ( KiB/s): min=56776, max=65384, per=99.65%, avg=62002.67, stdev=4591.12, samples=3 00:10:18.038 iops : min=14194, max=16346, avg=15500.67, stdev=1147.78, samples=3 00:10:18.038 write: IOPS=15.6k, BW=60.8MiB/s (63.7MB/s)(122MiB/2001msec); 0 zone resets 00:10:18.038 slat (nsec): min=4644, max=66864, avg=6157.08, stdev=2106.21 00:10:18.038 clat (usec): min=436, max=9868, avg=4100.49, stdev=677.69 00:10:18.038 lat (usec): min=442, max=9876, avg=4106.65, stdev=678.37 00:10:18.038 clat percentiles (usec): 00:10:18.038 | 1.00th=[ 2638], 5.00th=[ 3261], 10.00th=[ 3556], 20.00th=[ 3752], 00:10:18.038 | 30.00th=[ 3851], 40.00th=[ 3916], 50.00th=[ 3982], 60.00th=[ 4080], 00:10:18.038 | 70.00th=[ 4178], 80.00th=[ 4359], 90.00th=[ 4817], 95.00th=[ 5145], 00:10:18.038 | 99.00th=[ 6652], 99.50th=[ 7242], 99.90th=[ 9110], 99.95th=[ 9372], 00:10:18.038 | 99.99th=[ 9503] 00:10:18.038 bw ( KiB/s): min=57104, max=64320, per=98.91%, avg=61570.67, stdev=3902.51, samples=3 00:10:18.038 iops : min=14276, max=16080, avg=15392.67, stdev=975.63, samples=3 00:10:18.038 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.02% 00:10:18.038 lat (msec) : 2=0.23%, 4=50.67%, 10=49.06% 00:10:18.038 cpu : usr=98.95%, sys=0.05%, ctx=3, majf=0, minf=608 00:10:18.038 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:10:18.038 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:18.038 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:18.038 issued rwts: total=31125,31139,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:18.038 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:18.038 00:10:18.038 Run status group 0 (all jobs): 00:10:18.038 READ: bw=60.8MiB/s (63.7MB/s), 60.8MiB/s-60.8MiB/s (63.7MB/s-63.7MB/s), io=122MiB (127MB), run=2001-2001msec 00:10:18.038 WRITE: bw=60.8MiB/s (63.7MB/s), 60.8MiB/s-60.8MiB/s (63.7MB/s-63.7MB/s), io=122MiB (128MB), run=2001-2001msec 00:10:18.296 ----------------------------------------------------- 00:10:18.296 Suppressions used: 00:10:18.296 count bytes template 00:10:18.296 1 32 /usr/src/fio/parse.c 00:10:18.296 1 8 libtcmalloc_minimal.so 00:10:18.296 ----------------------------------------------------- 00:10:18.296 00:10:18.296 07:50:20 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:10:18.296 07:50:20 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:10:18.296 07:50:20 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:10:18.296 07:50:20 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:10:18.555 07:50:20 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:10:18.555 07:50:20 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:10:18.814 07:50:20 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:10:18.814 07:50:20 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:10:18.814 07:50:20 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 
/home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:10:18.814 07:50:20 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:10:18.814 07:50:20 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:10:18.814 07:50:20 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:10:18.814 07:50:20 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:18.814 07:50:20 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:10:18.814 07:50:20 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:10:18.814 07:50:20 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:10:18.814 07:50:20 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:18.814 07:50:20 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:10:18.814 07:50:20 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:10:18.814 07:50:20 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:10:18.814 07:50:20 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:10:18.814 07:50:20 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:10:18.814 07:50:20 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:10:18.814 07:50:20 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:10:19.073 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:10:19.073 fio-3.35 00:10:19.073 Starting 1 thread 00:10:23.285 00:10:23.285 test: (groupid=0, jobs=1): err= 0: pid=66307: Wed Oct 9 07:50:24 2024 00:10:23.285 read: IOPS=13.8k, BW=53.9MiB/s (56.6MB/s)(108MiB/2001msec) 00:10:23.285 slat (usec): min=4, max=327, avg= 6.93, stdev= 3.41 00:10:23.285 clat (usec): min=580, max=10348, avg=4617.28, stdev=694.05 00:10:23.285 lat (usec): min=587, max=10354, avg=4624.22, stdev=694.88 00:10:23.285 clat percentiles (usec): 00:10:23.285 | 1.00th=[ 2835], 5.00th=[ 3687], 10.00th=[ 3818], 20.00th=[ 4015], 00:10:23.285 | 30.00th=[ 4424], 40.00th=[ 4621], 50.00th=[ 4686], 60.00th=[ 4752], 00:10:23.285 | 70.00th=[ 4883], 80.00th=[ 4948], 90.00th=[ 5145], 95.00th=[ 5538], 00:10:23.285 | 99.00th=[ 7046], 99.50th=[ 7832], 99.90th=[ 9372], 99.95th=[ 9634], 00:10:23.285 | 99.99th=[10028] 00:10:23.285 bw ( KiB/s): min=52840, max=56440, per=98.20%, avg=54240.00, stdev=1928.73, samples=3 00:10:23.285 iops : min=13210, max=14110, avg=13560.00, stdev=482.18, samples=3 00:10:23.285 write: IOPS=13.8k, BW=53.9MiB/s (56.5MB/s)(108MiB/2001msec); 0 zone resets 00:10:23.285 slat (usec): min=4, max=405, avg= 7.09, stdev= 4.41 00:10:23.285 clat (usec): min=279, max=10279, avg=4619.43, stdev=693.68 00:10:23.285 lat (usec): min=287, max=10286, avg=4626.52, stdev=694.46 00:10:23.285 clat percentiles (usec): 00:10:23.285 | 1.00th=[ 2835], 5.00th=[ 3687], 10.00th=[ 3818], 20.00th=[ 4047], 00:10:23.285 | 30.00th=[ 4424], 40.00th=[ 4621], 50.00th=[ 4686], 60.00th=[ 4752], 00:10:23.285 | 70.00th=[ 4883], 80.00th=[ 4948], 90.00th=[ 5145], 95.00th=[ 5538], 00:10:23.285 | 99.00th=[ 
7046], 99.50th=[ 7701], 99.90th=[ 9110], 99.95th=[ 9634], 00:10:23.285 | 99.99th=[10028] 00:10:23.285 bw ( KiB/s): min=52936, max=56288, per=98.45%, avg=54328.00, stdev=1746.70, samples=3 00:10:23.285 iops : min=13234, max=14072, avg=13582.00, stdev=436.67, samples=3 00:10:23.285 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:10:23.285 lat (msec) : 2=0.16%, 4=18.55%, 10=81.26%, 20=0.01% 00:10:23.285 cpu : usr=98.15%, sys=0.35%, ctx=45, majf=0, minf=606 00:10:23.285 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:10:23.285 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:23.285 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:23.285 issued rwts: total=27632,27605,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:23.285 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:23.285 00:10:23.285 Run status group 0 (all jobs): 00:10:23.285 READ: bw=53.9MiB/s (56.6MB/s), 53.9MiB/s-53.9MiB/s (56.6MB/s-56.6MB/s), io=108MiB (113MB), run=2001-2001msec 00:10:23.285 WRITE: bw=53.9MiB/s (56.5MB/s), 53.9MiB/s-53.9MiB/s (56.5MB/s-56.5MB/s), io=108MiB (113MB), run=2001-2001msec 00:10:23.285 ----------------------------------------------------- 00:10:23.285 Suppressions used: 00:10:23.285 count bytes template 00:10:23.285 1 32 /usr/src/fio/parse.c 00:10:23.285 1 8 libtcmalloc_minimal.so 00:10:23.285 ----------------------------------------------------- 00:10:23.285 00:10:23.285 07:50:24 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:10:23.286 07:50:24 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:10:23.286 00:10:23.286 real 0m16.873s 00:10:23.286 user 0m13.507s 00:10:23.286 sys 0m2.030s 00:10:23.286 07:50:24 nvme.nvme_fio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:23.286 07:50:24 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:10:23.286 ************************************ 00:10:23.286 END TEST nvme_fio 00:10:23.286 ************************************ 00:10:23.286 00:10:23.286 real 1m31.685s 00:10:23.286 user 3m46.817s 00:10:23.286 sys 0m14.606s 00:10:23.286 07:50:24 nvme -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:23.286 07:50:24 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:23.286 ************************************ 00:10:23.286 END TEST nvme 00:10:23.286 ************************************ 00:10:23.286 07:50:24 -- spdk/autotest.sh@213 -- # [[ 0 -eq 1 ]] 00:10:23.286 07:50:24 -- spdk/autotest.sh@217 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:10:23.286 07:50:24 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:23.286 07:50:24 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:23.286 07:50:24 -- common/autotest_common.sh@10 -- # set +x 00:10:23.286 ************************************ 00:10:23.286 START TEST nvme_scc 00:10:23.286 ************************************ 00:10:23.286 07:50:24 nvme_scc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:10:23.286 * Looking for test storage... 
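The four fio runs in the nvme_fio test that just closed above all follow the same per-controller recipe: identify the device over PCIe, require at least one namespace, pick a block size from whether the controller reports an Extended Data LBA format, then launch fio through the SPDK plugin with libasan preloaded ahead of the ioengine so the sanitizer interposes first. A condensed sketch of that loop follows; the paths are abbreviated, the extended block size is hypothetical (every run above matched plain 4096), and the colon-to-dot traddr quoting mirrors the filenames in the trace:

    # Condensed sketch of the per-controller fio flow traced above (paths abbreviated).
    for bdf in 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0; do
        # Identify the controller; skip it if it exposes no namespaces.
        out=$(./build/bin/spdk_nvme_identify -r "trtype:PCIe traddr:$bdf") || continue
        grep -qE '^Namespace ID:[0-9]+' <<< "$out" || continue
        bs=4096
        # Hypothetical metadata-inclusive size; none of the runs above hit this branch.
        grep -q 'Extended Data LBA' <<< "$out" && bs=4160
        # fio's filename syntax reserves ':', so the traddr swaps colons for dots.
        LD_PRELOAD="/usr/lib64/libasan.so.8 ./build/fio/spdk_nvme" \
            fio app/fio/nvme/example_config.fio \
                "--filename=trtype=PCIe traddr=${bdf//:/.}" --bs="$bs"
    done

Listing libasan before the plugin in LD_PRELOAD is the point of the ldd/grep/awk dance in the trace: the script discovers which sanitizer runtime the plugin links against and preloads that exact library.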
00:10:23.286 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:23.286 07:50:24 nvme_scc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:23.286 07:50:24 nvme_scc -- common/autotest_common.sh@1681 -- # lcov --version 00:10:23.286 07:50:24 nvme_scc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:23.286 07:50:25 nvme_scc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:23.286 07:50:25 nvme_scc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:23.286 07:50:25 nvme_scc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:23.286 07:50:25 nvme_scc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:23.286 07:50:25 nvme_scc -- scripts/common.sh@336 -- # IFS=.-: 00:10:23.286 07:50:25 nvme_scc -- scripts/common.sh@336 -- # read -ra ver1 00:10:23.286 07:50:25 nvme_scc -- scripts/common.sh@337 -- # IFS=.-: 00:10:23.286 07:50:25 nvme_scc -- scripts/common.sh@337 -- # read -ra ver2 00:10:23.286 07:50:25 nvme_scc -- scripts/common.sh@338 -- # local 'op=<' 00:10:23.286 07:50:25 nvme_scc -- scripts/common.sh@340 -- # ver1_l=2 00:10:23.286 07:50:25 nvme_scc -- scripts/common.sh@341 -- # ver2_l=1 00:10:23.286 07:50:25 nvme_scc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:23.286 07:50:25 nvme_scc -- scripts/common.sh@344 -- # case "$op" in 00:10:23.286 07:50:25 nvme_scc -- scripts/common.sh@345 -- # : 1 00:10:23.286 07:50:25 nvme_scc -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:23.286 07:50:25 nvme_scc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:23.286 07:50:25 nvme_scc -- scripts/common.sh@365 -- # decimal 1 00:10:23.286 07:50:25 nvme_scc -- scripts/common.sh@353 -- # local d=1 00:10:23.286 07:50:25 nvme_scc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:23.286 07:50:25 nvme_scc -- scripts/common.sh@355 -- # echo 1 00:10:23.286 07:50:25 nvme_scc -- scripts/common.sh@365 -- # ver1[v]=1 00:10:23.286 07:50:25 nvme_scc -- scripts/common.sh@366 -- # decimal 2 00:10:23.286 07:50:25 nvme_scc -- scripts/common.sh@353 -- # local d=2 00:10:23.286 07:50:25 nvme_scc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:23.286 07:50:25 nvme_scc -- scripts/common.sh@355 -- # echo 2 00:10:23.286 07:50:25 nvme_scc -- scripts/common.sh@366 -- # ver2[v]=2 00:10:23.286 07:50:25 nvme_scc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:23.286 07:50:25 nvme_scc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:23.286 07:50:25 nvme_scc -- scripts/common.sh@368 -- # return 0 00:10:23.286 07:50:25 nvme_scc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:23.286 07:50:25 nvme_scc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:23.286 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:23.286 --rc genhtml_branch_coverage=1 00:10:23.286 --rc genhtml_function_coverage=1 00:10:23.286 --rc genhtml_legend=1 00:10:23.286 --rc geninfo_all_blocks=1 00:10:23.286 --rc geninfo_unexecuted_blocks=1 00:10:23.286 00:10:23.286 ' 00:10:23.286 07:50:25 nvme_scc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:23.286 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:23.286 --rc genhtml_branch_coverage=1 00:10:23.286 --rc genhtml_function_coverage=1 00:10:23.286 --rc genhtml_legend=1 00:10:23.286 --rc geninfo_all_blocks=1 00:10:23.286 --rc geninfo_unexecuted_blocks=1 00:10:23.286 00:10:23.286 ' 00:10:23.286 07:50:25 nvme_scc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 
00:10:23.286 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:23.286 --rc genhtml_branch_coverage=1 00:10:23.286 --rc genhtml_function_coverage=1 00:10:23.286 --rc genhtml_legend=1 00:10:23.286 --rc geninfo_all_blocks=1 00:10:23.286 --rc geninfo_unexecuted_blocks=1 00:10:23.286 00:10:23.286 ' 00:10:23.286 07:50:25 nvme_scc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:23.286 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:23.286 --rc genhtml_branch_coverage=1 00:10:23.286 --rc genhtml_function_coverage=1 00:10:23.286 --rc genhtml_legend=1 00:10:23.286 --rc geninfo_all_blocks=1 00:10:23.286 --rc geninfo_unexecuted_blocks=1 00:10:23.286 00:10:23.286 ' 00:10:23.286 07:50:25 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:10:23.286 07:50:25 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:10:23.286 07:50:25 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:10:23.286 07:50:25 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:10:23.286 07:50:25 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:23.286 07:50:25 nvme_scc -- scripts/common.sh@15 -- # shopt -s extglob 00:10:23.286 07:50:25 nvme_scc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:23.286 07:50:25 nvme_scc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:23.286 07:50:25 nvme_scc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:23.286 07:50:25 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:23.286 07:50:25 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:23.286 07:50:25 nvme_scc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:23.286 07:50:25 nvme_scc -- paths/export.sh@5 -- # export PATH 00:10:23.286 07:50:25 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
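The `lt 1.15 2` gate in the lcov check above expands into cmp_versions from scripts/common.sh: both version strings are split into arrays on '.', '-' and ':' and walked element-wise until a field differs, with this trace deciding at 1 < 2 on the first field. A minimal sketch of the same walk, folded into one hypothetical helper and assuming purely numeric fields (the real script validates each field through its decimal helper):

    # Condensed sketch of the cmp_versions walk traced above.
    # Returns 0 when $1 < $2, 1 otherwise; assumes numeric fields only.
    version_lt() {
        local -a a b
        local n v
        IFS='.-:' read -ra a <<< "$1"
        IFS='.-:' read -ra b <<< "$2"
        n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( v = 0; v < n; v++ )); do
            # Missing fields compare as 0, so "2" vs "1.15" is 2.0 vs 1.15.
            (( ${a[v]:-0} > ${b[v]:-0} )) && return 1
            (( ${a[v]:-0} < ${b[v]:-0} )) && return 0
        done
        return 1   # equal versions are not "less than"
    }

    version_lt 1.15 2 && echo "1.15 predates 2"   # mirrors the trace: 1 < 2 on field 0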
00:10:23.286 07:50:25 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:10:23.286 07:50:25 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:10:23.286 07:50:25 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:10:23.286 07:50:25 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:10:23.286 07:50:25 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:10:23.286 07:50:25 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:10:23.286 07:50:25 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:10:23.286 07:50:25 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:10:23.286 07:50:25 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:10:23.286 07:50:25 nvme_scc -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:23.286 07:50:25 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:10:23.286 07:50:25 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:10:23.286 07:50:25 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:10:23.286 07:50:25 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:10:23.545 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:23.803 Waiting for block devices as requested 00:10:23.803 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:10:23.803 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:10:23.803 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:10:24.066 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:10:29.346 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:10:29.346 07:50:30 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:10:29.346 07:50:30 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:10:29.346 07:50:30 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:29.346 07:50:30 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:10:29.346 07:50:30 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:10:29.347 07:50:30 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:10:29.347 07:50:30 nvme_scc -- scripts/common.sh@18 -- # local i 00:10:29.347 07:50:30 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:10:29.347 07:50:30 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:29.347 07:50:30 nvme_scc -- scripts/common.sh@27 -- # return 0 00:10:29.347 07:50:30 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:10:29.347 07:50:30 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:10:29.347 07:50:30 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:10:29.347 07:50:30 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:29.347 07:50:30 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:10:29.347 07:50:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.347 07:50:30 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:10:29.347 07:50:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.347 07:50:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:29.347 07:50:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.347 07:50:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.347 07:50:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:29.347 07:50:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:10:29.347 07:50:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 
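The register dump beginning with vid just above is nvme_get in functions.sh folding the output of /usr/local/src/nvme-cli/nvme id-ctrl into a global associative array: each `reg : val` line is split on the colon, the register name is stripped of whitespace, the value keeps its trailing padding (sn, mn and fr are space-padded), and non-empty pairs are stored for later lookups such as ${nvme0[oncs]}. A stripped-down sketch of that parse, assuming only the plain name/value lines and hard-coding the array name the trace uses:

    # Stripped-down sketch of the id-ctrl parse driving this register dump.
    declare -gA nvme0=()
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}              # "vid   " -> "vid"
        val=${val#"${val%%[![:space:]]*}"}    # ltrim; keep trailing pad in sn/mn/fr
        [[ -n $reg && -n $val ]] && nvme0[$reg]=$val
    done < <(nvme id-ctrl /dev/nvme0)
    echo "${nvme0[vid]}"   # 0x1b36 on this QEMU controller, as in the trace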
00:10:29.347 07:50:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.347 07:50:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.347 07:50:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:29.347 07:50:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:10:29.347 07:50:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:10:29.347 07:50:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.347 07:50:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.347 07:50:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:10:29.347 07:50:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:10:29.347 07:50:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:10:29.347 07:50:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.347 07:50:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.347 07:50:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:29.347 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:10:29.347 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:10:29.347 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.347 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.347 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:29.347 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:10:29.347 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:10:29.347 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.347 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.347 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:29.347 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:10:29.347 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:10:29.347 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.347 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.347 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:29.347 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:10:29.347 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:10:29.347 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.347 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.347 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.347 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:10:29.347 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:10:29.347 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.347 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.347 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:29.347 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:10:29.347 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:10:29.347 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.347 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.347 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.347 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:10:29.347 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:10:29.347 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.347 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:10:29.347 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:29.347 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:10:29.347 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:10:29.347 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.347 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.347 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.347 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:10:29.347 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:10:29.347 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.347 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.347 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.347 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:10:29.347 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:10:29.347 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.347 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.347 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:29.347 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:10:29.347 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:10:29.347 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.347 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.347 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:10:29.347 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:10:29.347 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:10:29.347 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.347 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.347 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.347 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:10:29.347 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:10:29.347 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.347 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.347 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:29.347 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:10:29.347 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:10:29.347 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.347 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.347 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:29.347 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:29.347 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:10:29.347 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.347 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.347 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.347 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:10:29.347 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:10:29.347 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.347 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.347 07:50:31 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.347 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:10:29.347 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:10:29.347 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.347 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.347 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.347 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:10:29.347 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:10:29.347 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.347 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.347 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.347 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:10:29.347 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:10:29.347 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.347 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.347 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.347 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:10:29.347 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:10:29.347 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.347 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.347 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.347 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:10:29.347 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:10:29.347 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.347 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.347 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:29.347 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:10:29.347 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:10:29.347 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.347 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.347 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:29.347 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:10:29.347 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:10:29.347 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.347 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.347 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:29.347 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:10:29.347 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:10:29.347 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.348 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.348 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:29.348 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:10:29.348 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:10:29.348 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.348 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.348 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:29.348 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:10:29.348 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 
00:10:29.348 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.348 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.348 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.348 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:10:29.348 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:10:29.348 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.348 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.348 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.348 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:10:29.348 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:10:29.348 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.348 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.348 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.348 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:10:29.348 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:10:29.348 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.348 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.348 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.348 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:10:29.348 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:10:29.348 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.348 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.348 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:29.348 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:10:29.348 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:10:29.348 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.348 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.348 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:29.348 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:10:29.348 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:10:29.348 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.348 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.348 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.348 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:10:29.348 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:10:29.348 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.348 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.348 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.348 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:10:29.348 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:10:29.348 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.348 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.348 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.348 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:10:29.348 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:10:29.348 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.348 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.348 07:50:31 nvme_scc -- nvme/functions.sh@22 
-- # [[ -n 0 ]] 00:10:29.348 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:10:29.348 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:10:29.348 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.348 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.348 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.348 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:10:29.348 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:10:29.348 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.348 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.348 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.348 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:10:29.348 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:10:29.348 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.348 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.348 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.348 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:10:29.348 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:10:29.348 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.348 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.348 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.348 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:10:29.348 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:10:29.348 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.348 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.348 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.348 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:10:29.348 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:10:29.348 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.348 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.348 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.348 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:10:29.348 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:10:29.348 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.348 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.348 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.348 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:10:29.348 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:10:29.348 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.348 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.348 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.348 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:10:29.348 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:10:29.348 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.348 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.348 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.348 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:10:29.348 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:10:29.348 07:50:31 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:10:29.348 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.348 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.348 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:10:29.348 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:10:29.348 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.348 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.348 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.348 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:10:29.348 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:10:29.348 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.348 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.348 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.348 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:10:29.348 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:10:29.348 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.348 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.348 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.348 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:10:29.348 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:10:29.348 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.348 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.348 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.348 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:10:29.348 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:10:29.348 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.348 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.348 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.348 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:10:29.348 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:10:29.348 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.348 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.348 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.349 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:10:29.349 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:10:29.349 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.349 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.349 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.349 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:10:29.349 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:10:29.349 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.349 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.349 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.349 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:10:29.349 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:10:29.349 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.349 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.349 07:50:31 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.349 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:10:29.349 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:10:29.349 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.349 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.349 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.349 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:10:29.349 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:10:29.349 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.349 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.349 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.349 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:10:29.349 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:10:29.349 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.349 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.349 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:29.349 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:10:29.349 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:10:29.349 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.349 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.349 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:29.349 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:10:29.349 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:10:29.349 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.349 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.349 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.349 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:10:29.349 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:10:29.349 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.349 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.349 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:29.349 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:10:29.349 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:10:29.349 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.349 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.349 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:29.349 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:10:29.349 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:10:29.349 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.349 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.349 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.349 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:10:29.349 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:10:29.349 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.349 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.349 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.349 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:10:29.349 07:50:31 nvme_scc -- nvme/functions.sh@23 
-- # nvme0[fna]=0 00:10:29.349 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.349 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.349 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:29.349 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:10:29.349 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:10:29.349 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.349 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.349 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.349 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:10:29.349 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:10:29.349 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.349 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.349 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.349 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:10:29.349 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:10:29.349 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.349 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.349 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.349 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:10:29.349 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:10:29.349 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.349 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.349 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.349 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:10:29.349 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:10:29.349 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.349 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.349 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.349 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:10:29.349 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:10:29.349 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.349 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.349 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:29.349 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:10:29.349 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:10:29.349 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.349 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.349 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:29.349 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:10:29.349 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:10:29.349 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.349 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.349 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.349 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:10:29.349 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:10:29.349 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.349 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.349 07:50:31 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.349 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:10:29.349 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:10:29.349 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.349 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.349 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.349 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:10:29.349 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:10:29.349 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.349 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.349 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:10:29.349 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:10:29.349 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:10:29.349 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.349 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.349 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.349 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:10:29.349 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:10:29.349 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.349 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.349 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.349 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:10:29.349 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:10:29.349 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.349 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.349 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.349 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:10:29.349 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:10:29.349 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.349 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.349 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.349 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:10:29.349 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:10:29.349 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.349 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.349 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.349 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:10:29.349 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:10:29.349 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.349 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.349 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.349 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:10:29.349 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:10:29.349 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.349 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.349 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:29.349 07:50:31 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:29.349 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:29.349 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.349 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.349 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:29.349 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:29.349 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:29.349 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.349 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.350 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:29.350 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:10:29.350 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:10:29.350 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.350 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.350 07:50:31 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:10:29.350 07:50:31 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:10:29.350 07:50:31 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:10:29.350 07:50:31 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:10:29.350 07:50:31 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:10:29.350 07:50:31 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:10:29.350 07:50:31 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:29.350 07:50:31 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:10:29.350 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.350 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.350 07:50:31 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:10:29.350 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:29.350 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.350 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.350 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:29.350 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:10:29.350 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:10:29.350 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.350 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.350 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:29.350 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:10:29.350 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:10:29.350 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.350 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.350 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:29.350 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:10:29.350 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:10:29.350 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.350 07:50:31 nvme_scc -- nvme/functions.sh@21 -- 
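Here the trace turns from the controller to its namespaces: a nameref aliases the per-controller map (local -n _ctrl_ns=nvme0_ns), /sys/class/nvme is globbed for nvme0n*, and nvme_get is re-run with id-ns. A sketch of that discovery walk, assuming the standard sysfs layout the script relies on:

# Sketch of the namespace-discovery step traced above,
# assuming the /sys/class/nvme/nvmeX/nvmeXnY sysfs layout.
for ctrl in /sys/class/nvme/nvme*; do
    ctrl_dev=${ctrl##*/}                  # e.g. nvme0
    for ns in "$ctrl/${ctrl_dev}n"*; do   # e.g. .../nvme0/nvme0n1
        [[ -e $ns ]] || continue          # glob may not match anything
        echo "controller $ctrl_dev has namespace ${ns##*/}"
    done
done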
# read -r reg val 00:10:29.350 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:29.350 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:10:29.350 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:10:29.350 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.350 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.350 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:29.350 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:10:29.350 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:10:29.350 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.350 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.350 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:29.350 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:10:29.350 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:10:29.350 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.350 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.350 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:29.350 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:10:29.350 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:10:29.350 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.350 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.350 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:29.350 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:10:29.350 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:10:29.350 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.350 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.350 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.350 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:10:29.350 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:10:29.350 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.350 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.350 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.350 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:10:29.350 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:10:29.350 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.350 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.350 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.350 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:10:29.350 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:10:29.350 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.350 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.350 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.350 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:10:29.350 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:10:29.350 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.350 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.350 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:29.350 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme0n1[dlfeat]="1"' 00:10:29.350 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:10:29.350 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.350 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.350 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.350 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:10:29.350 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:10:29.350 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.350 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.350 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.350 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:10:29.350 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:10:29.350 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.350 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.350 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.350 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:10:29.350 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:10:29.350 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.350 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.350 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.350 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:10:29.350 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:10:29.350 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.350 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.350 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.350 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:10:29.350 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:10:29.350 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.350 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.350 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.350 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:10:29.350 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:10:29.350 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.350 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.350 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.350 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:10:29.350 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:10:29.350 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.350 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.350 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.350 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:10:29.350 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:10:29.350 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.350 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.350 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.350 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:10:29.350 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:10:29.350 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:10:29.350 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.350 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.350 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:10:29.350 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:10:29.350 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.350 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.350 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.350 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:10:29.350 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:10:29.350 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.350 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.350 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.351 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:10:29.351 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:10:29.351 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.351 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.351 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.351 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:10:29.351 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:10:29.351 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.351 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.351 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:29.351 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:10:29.351 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:10:29.351 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.351 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.351 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:29.351 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:10:29.351 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:10:29.351 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.351 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.351 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:29.351 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:10:29.351 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:10:29.351 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.351 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.351 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.351 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:10:29.351 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:10:29.351 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.351 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.351 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.351 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:10:29.351 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:10:29.351 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.351 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.351 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:10:29.351 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:10:29.351 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:10:29.351 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.351 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.351 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.351 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:10:29.351 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:10:29.351 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.351 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.351 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.351 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:10:29.351 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:10:29.351 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.351 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.351 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:29.351 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:10:29.351 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:10:29.351 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.351 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.351 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:29.351 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:10:29.351 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:10:29.351 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.351 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.351 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:29.351 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:29.351 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:29.351 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.351 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.351 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:29.351 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:29.351 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:29.351 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.351 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.351 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:29.351 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:29.351 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:29.351 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.351 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.351 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:29.351 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:29.351 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:29.351 07:50:31 
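Note that each LBA-format descriptor is stored as the raw remainder of its line (e.g. 'ms:8 lbads:9 rp:0 '), so pulling one field back out takes a second parse. A hypothetical helper for that:

# Hypothetical helper: split a stored lbaf descriptor string (as captured
# above) back into its ms/lbads/rp fields.
lbaf_field() {              # usage: lbaf_field 'ms:8 lbads:9 rp:0 ' lbads
    local desc=$1 field=$2 kv
    for kv in $desc; do     # unquoted on purpose: split on whitespace
        [[ $kv == "$field":* ]] && { echo "${kv#*:}"; return; }
    done
}
lbaf_field 'ms:8 lbads:9 rp:0 ' lbads   # prints 9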
nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.351 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.351 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:29.351 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:29.351 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:29.351 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.351 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.351 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:29.351 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:29.351 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:29.351 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.351 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.351 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:29.351 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:29.351 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:29.351 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.351 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.351 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:29.351 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:29.351 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:29.351 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.351 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.351 07:50:31 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:10:29.351 07:50:31 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:10:29.351 07:50:31 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:10:29.351 07:50:31 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:10:29.351 07:50:31 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:10:29.351 07:50:31 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:29.351 07:50:31 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:10:29.351 07:50:31 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:10:29.351 07:50:31 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:10:29.351 07:50:31 nvme_scc -- scripts/common.sh@18 -- # local i 00:10:29.351 07:50:31 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:10:29.351 07:50:31 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:29.351 07:50:31 nvme_scc -- scripts/common.sh@27 -- # return 0 00:10:29.351 07:50:31 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:10:29.351 07:50:31 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:10:29.351 07:50:31 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:10:29.351 07:50:31 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:29.351 07:50:31 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:10:29.351 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.351 07:50:31 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:10:29.351 
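With lbaf4 marked '(in use)' and the earlier flbas=0x4 and nsze=0x140000, the namespace geometry decodes by hand; the arithmetic below uses only values from this trace. Right after the lbaf rows, the script also files the finished controller into its global maps (ctrls, nvmes, bdfs, ordered_ctrls) and moves on to the second controller at PCI 0000:00:10.0, which pci_can_use admits because no allow/block list is set.

# Worked decode of the nvme0n1 geometry using values captured in this trace.
flbas=0x4
nsze=0x140000
lbads=12                     # from 'ms:0 lbads:12 rp:0 (in use)'

fmt=$(( flbas & 0xf ))       # low nibble of FLBAS selects the active format
bs=$(( 1 << lbads ))         # 2^12 = 4096-byte logical blocks
echo "active format: lbaf$fmt, block size $bs B"
echo "namespace size: $(( nsze * bs / 1024 / 1024 / 1024 )) GiB"   # 5 GiB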
07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.351 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:29.351 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.351 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.351 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:29.351 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:10:29.351 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:10:29.351 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.351 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.352 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:29.352 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:10:29.352 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:10:29.352 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.352 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.352 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:10:29.352 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:10:29.352 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:10:29.352 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.352 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.352 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:29.352 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:10:29.352 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:10:29.352 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.352 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.352 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:29.352 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:10:29.352 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:10:29.352 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.352 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.352 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:29.352 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:10:29.352 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:10:29.352 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.352 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.352 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:29.352 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:10:29.352 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:10:29.352 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.352 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.352 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.352 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:10:29.352 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:10:29.352 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.352 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.352 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:29.352 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:10:29.352 07:50:31 nvme_scc -- nvme/functions.sh@23 -- 
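The identity fields just captured for nvme1 (vid 0x1b36, ssvid 0x1af4, mn 'QEMU NVMe Ctrl', sn 12340) mark this as QEMU's emulated controller; 0x1b36 is Red Hat's PCI vendor ID, used for QEMU virtual devices. A quick way to eyeball the same fields outside the harness:

# Print just the identity fields captured above for nvme1.
nvme id-ctrl /dev/nvme1 | grep -E '^(vid|ssvid|sn|mn|fr) +:'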
# nvme1[mdts]=7 00:10:29.352 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.352 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.352 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.352 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:10:29.352 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:10:29.352 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.352 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.352 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:29.352 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:10:29.352 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:10:29.352 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.352 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.352 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.352 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:10:29.352 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:10:29.352 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.352 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.352 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.352 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:10:29.352 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:10:29.352 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.352 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.352 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:29.352 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:10:29.352 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:10:29.352 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.352 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.352 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:10:29.352 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:10:29.352 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:10:29.352 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.352 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.352 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.352 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:10:29.352 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:10:29.352 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.352 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.352 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:29.352 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:10:29.352 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:10:29.352 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.352 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.352 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:29.352 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:29.352 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:10:29.352 07:50:31 nvme_scc -- 
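Two of the values above decode compactly: ver is a packed major/minor/tertiary triple, and mdts caps the transfer size as a power-of-two multiple of the controller's minimum page size. A sketch, assuming the common 4 KiB CAP.MPSMIN:

# VER (0x10400): per the NVMe spec layout, MJR=bits 31:16, MNR=15:8, TER=7:0.
ver=0x10400
printf 'NVMe version %d.%d.%d\n' $((ver >> 16)) $((ver >> 8 & 0xff)) $((ver & 0xff))
# -> NVMe version 1.4.0

# MDTS (7): max transfer = 2^mdts * MPSMIN; assuming 4 KiB pages:
mdts=7
echo "max transfer: $(( (1 << mdts) * 4096 / 1024 )) KiB"   # 512 KiB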
nvme/functions.sh@21 -- # IFS=: 00:10:29.352 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.352 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.352 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:10:29.352 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:10:29.352 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.352 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.352 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.352 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:10:29.352 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:10:29.352 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.352 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.352 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.352 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:10:29.352 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:10:29.352 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.352 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.352 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.352 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:10:29.352 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:10:29.352 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.352 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.352 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.352 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:10:29.352 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:10:29.352 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.352 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.352 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.352 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:10:29.352 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:10:29.352 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.352 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.352 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:29.352 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:10:29.352 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:10:29.352 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.352 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.352 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:29.352 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:10:29.352 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:10:29.352 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.352 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.352 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:29.352 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:10:29.352 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:10:29.352 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.352 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.352 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:29.352 07:50:31 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:10:29.352 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:10:29.352 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.352 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.352 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:29.352 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:10:29.352 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:10:29.352 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.352 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.352 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.352 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:10:29.352 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:10:29.352 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.352 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.352 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.352 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:10:29.352 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:10:29.352 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.352 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.352 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.352 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:10:29.352 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:10:29.352 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.352 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.352 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.352 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:10:29.352 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:10:29.352 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.352 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.352 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:29.352 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:10:29.352 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:10:29.352 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.353 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.353 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:29.353 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:10:29.353 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:10:29.353 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.353 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.353 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.353 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:10:29.353 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:10:29.353 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.353 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.353 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.353 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:10:29.353 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:10:29.353 07:50:31 nvme_scc -- nvme/functions.sh@21 
-- # IFS=: 00:10:29.353 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.353 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.353 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:10:29.353 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:10:29.353 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.353 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.353 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.353 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:10:29.353 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:10:29.353 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.353 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.353 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.353 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:10:29.353 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:10:29.353 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.353 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.353 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.353 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:10:29.353 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:10:29.353 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.353 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.353 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.353 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:10:29.353 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:10:29.353 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.353 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.353 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.353 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:10:29.353 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:10:29.353 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.353 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.353 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.353 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:10:29.353 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:10:29.353 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.353 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.353 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.353 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:10:29.353 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:10:29.353 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.353 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.353 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.353 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:10:29.353 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:10:29.353 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.353 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.353 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.353 07:50:31 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:10:29.353 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:10:29.353 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.353 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.353 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.353 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:10:29.353 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:10:29.353 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.353 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.353 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.353 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:10:29.353 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:10:29.353 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.353 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.353 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.353 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:10:29.353 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:10:29.353 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.353 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.353 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.353 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:10:29.353 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:10:29.353 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.353 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.353 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.353 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:10:29.353 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:10:29.353 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.353 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.353 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.353 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:10:29.353 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:10:29.353 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.353 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.353 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.353 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:10:29.353 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:10:29.353 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.353 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.353 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.353 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:10:29.353 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:10:29.353 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.353 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.353 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.353 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:10:29.353 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:10:29.353 07:50:31 nvme_scc -- 
00:10:29.353 07:50:31 nvme_scc -- nvme/functions.sh -- # nvme_get nvme1 (id-ctrl /dev/nvme1, continued) -- remaining fields parsed into nvme1[]:
    nanagrpid=0 pels=0 domainid=0 megcap=0 sqes=0x66 cqes=0x44 maxcmd=0 nn=256 oncs=0x15d fuses=0 fna=0
    vwc=0x7 awun=0 awupf=0 icsvscc=0 nwpc=0 acwu=0 ocfs=0x3 sgls=0x1 mnan=0 maxdna=0 maxcna=0
    subnqn=nqn.2019-08.org.qemu:12340 ioccsz=0 iorcsz=0 icdoff=0 fcatt=0 msdbd=0 ofcs=0
    ps0='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' rwt='0 rwl:0 idle_power:- active_power:-'
    active_power_workload=-
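The xtrace above comes from nvme_get() in nvme/functions.sh: it runs nvme-cli against the device, splits each 'field : value' output line on the colon with IFS=:, and stores the pair in a global associative array (nvme1 here). A minimal stand-alone sketch of the same pattern, assuming stock nvme-cli output; the whitespace trimming below is this sketch's own, since only the trace of functions.sh, not its source, appears in this log:

    #!/usr/bin/env bash
    # Sketch: fold `nvme id-ctrl` output into an associative array,
    # mirroring the IFS=: / read -r reg val loop in the trace.
    declare -A ctrl=()
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}              # drop padding around the field name
        val="${val#"${val%%[![:space:]]*}"}"  # left-trim the value
        [[ -n $reg && -n $val ]] && ctrl[$reg]=$val
    done < <(nvme id-ctrl /dev/nvme1)
    echo "sqes=${ctrl[sqes]} cqes=${ctrl[cqes]} nn=${ctrl[nn]} oncs=${ctrl[oncs]}"

Against the nvme1 of this run, that would print sqes=0x66 cqes=0x44 nn=256 oncs=0x15d. Note that the value is read as the last field, so embedded colons (as in the subnqn and ps0 values) survive intact.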
00:10:29.354 07:50:31 nvme_scc -- nvme/functions.sh -- # for ns in "$ctrl/${ctrl##*/}n"* -- found /sys/class/nvme/nvme1/nvme1n1; ns_dev=nvme1n1
00:10:29.354 07:50:31 nvme_scc -- nvme/functions.sh -- # nvme_get nvme1n1 (id-ns /dev/nvme1n1) -- fields parsed into nvme1n1[]:
    nsze=0x17a17a ncap=0x17a17a nuse=0x17a17a nsfeat=0x14 nlbaf=7 flbas=0x7 mc=0x3 dpc=0x1f dps=0
    nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0
    npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0
    nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
    lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0'
    lbaf4='ms:0 lbads:12 rp:0' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0 (in use)'
00:10:29.356 07:50:31 nvme_scc -- nvme/functions.sh@58-@63 -- # register: _ctrl_ns[1]=nvme1n1; ctrls[nvme1]=nvme1; nvmes[nvme1]=nvme1_ns; bdfs[nvme1]=0000:00:10.0; ordered_ctrls[1]=nvme1
00:10:29.356 07:50:31 nvme_scc -- nvme/functions.sh@47-@50 -- # next controller: /sys/class/nvme/nvme2 exists; pci=0000:00:12.0
00:10:29.356 07:50:31 nvme_scc -- scripts/common.sh@18-@27 -- # pci_can_use 0000:00:12.0 -- allow list empty, block list empty -> return 0 (usable)
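nvme1n1 advertises eight LBA formats and flbas=0x7, so format 7 is in use: lbads:12 means 2^12 = 4096-byte data blocks and ms:64 means 64 bytes of per-block metadata, matching the '(in use)' tag above. A quick decode of that field (bit layout per the NVMe base spec; the table literal simply mirrors this log):

    #!/usr/bin/env bash
    # Decode FLBAS for nvme1n1 using the LBA-format table printed above.
    flbas=0x7
    lbaf=('ms:0 lbads:9' 'ms:8 lbads:9' 'ms:16 lbads:9' 'ms:64 lbads:9'
          'ms:0 lbads:12' 'ms:8 lbads:12' 'ms:16 lbads:12' 'ms:64 lbads:12')
    idx=$((flbas & 0xf))                  # FLBAS bits 3:0 select the format
    lbads=${lbaf[idx]##*lbads:}
    echo "lbaf$idx in use: ${lbaf[idx]} -> $((1 << lbads))-byte blocks"
    # prints: lbaf7 in use: ms:64 lbads:12 -> 4096-byte blocks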
00:10:29.356 07:50:31 nvme_scc -- nvme/functions.sh@51-@52 -- # ctrl_dev=nvme2; nvme_get nvme2 id-ctrl /dev/nvme2
00:10:29.356 07:50:31 nvme_scc -- nvme/functions.sh -- # id-ctrl /dev/nvme2 -- fields parsed into nvme2[]:
    vid=0x1b36 ssvid=0x1af4 sn='12342 ' mn='QEMU NVMe Ctrl ' fr='8.0.0 ' rab=6 ieee=525400 cmic=0 mdts=7
    cntlid=0 ver=0x10400 rtd3r=0 rtd3e=0 oaes=0x100 ctratt=0x8000 rrls=0 cntrltype=1
    fguid=00000000-0000-0000-0000-000000000000 crdt1=0 crdt2=0 crdt3=0 nvmsr=0 vwci=0 mec=0
    oacs=0x12a acl=3 aerl=3 frmw=0x3 lpa=0x7 elpe=0 npss=0 avscc=0 apsta=0 wctemp=343 cctemp=373
    mtfa=0 hmpre=0 hmmin=0 tnvmcap=0 unvmcap=0 rpmbs=0 edstt=0 dsto=0 fwug=0 kas=0 hctma=0 mntmt=0
    mxtmt=0 sanicap=0 hmminds=0 hmmaxd=0 nsetidmax=0 endgidmax=0 anatt=0
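The wctemp=343 and cctemp=373 thresholds above are kelvin, which is how the NVMe identify-controller data reports the warning and critical composite temperatures; converted:

    # WCTEMP/CCTEMP above are kelvin; roughly:
    wctemp=343 cctemp=373
    echo "warning $((wctemp - 273)) C, critical $((cctemp - 273)) C"   # 70 C / 100 C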
    anacap=0 anagrpmax=0 nanagrpid=0 pels=0 domainid=0 megcap=0 sqes=0x66 cqes=0x44 maxcmd=0 nn=256
    oncs=0x15d fuses=0 fna=0 vwc=0x7 awun=0 awupf=0 icsvscc=0 nwpc=0 acwu=0 ocfs=0x3 sgls=0x1
    mnan=0 maxdna=0 maxcna=0 subnqn=nqn.2019-08.org.qemu:12342 ioccsz=0 iorcsz=0 icdoff=0 fcatt=0
    msdbd=0 ofcs=0 ps0='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0'
    rwt='0 rwl:0 idle_power:- active_power:-' active_power_workload=-
00:10:29.359 07:50:31 nvme_scc -- nvme/functions.sh@53-@56 -- # local -n _ctrl_ns=nvme2_ns; found /sys/class/nvme/nvme2/nvme2n1; ns_dev=nvme2n1
0x100000 ]] 00:10:29.619 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:10:29.619 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:10:29.619 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.619 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.619 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:29.619 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:10:29.619 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:10:29.619 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.619 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.619 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:29.619 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:10:29.619 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:10:29.619 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.619 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.620 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:29.620 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:10:29.620 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:10:29.620 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.620 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.620 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:29.620 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:10:29.620 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:10:29.620 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.620 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.620 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:29.620 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:10:29.620 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:10:29.620 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.620 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.620 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:29.620 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:10:29.620 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:10:29.620 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.620 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.620 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:29.620 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:10:29.620 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:10:29.620 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.620 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.620 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.620 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:10:29.620 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:10:29.620 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.620 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.620 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.620 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 
00:10:29.620 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:10:29.620 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.620 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.620 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.620 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:10:29.620 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:10:29.620 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.620 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.620 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.620 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:10:29.620 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:10:29.620 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.620 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.620 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:29.620 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:10:29.620 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:10:29.620 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.620 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.620 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.620 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:10:29.620 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:10:29.620 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.620 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.620 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.620 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:10:29.620 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:10:29.620 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.620 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.620 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.620 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:10:29.620 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:10:29.620 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.620 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.620 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.620 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:10:29.620 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:10:29.620 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.620 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.620 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.620 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:10:29.620 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:10:29.620 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.620 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.620 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.620 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:10:29.620 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:10:29.620 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.620 07:50:31 
nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.620 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.620 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:10:29.620 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:10:29.620 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.620 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.620 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.620 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:10:29.620 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:10:29.620 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.620 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.620 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.620 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:10:29.620 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:10:29.620 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.620 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.620 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.620 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:10:29.620 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:10:29.620 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.620 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.620 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.620 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:10:29.620 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:10:29.620 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.620 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.620 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.620 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:10:29.620 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:10:29.620 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.620 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.620 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.620 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:10:29.620 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:10:29.620 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.620 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.620 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:29.620 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:10:29.620 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:10:29.620 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.620 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.620 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:29.620 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:10:29.620 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:10:29.620 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.620 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.620 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:29.620 07:50:31 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:10:29.620 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:10:29.620 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.620 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.620 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.620 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:10:29.620 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:10:29.620 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.620 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.620 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.620 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:10:29.620 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:10:29.620 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.620 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.620 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.620 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:10:29.620 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:10:29.620 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.620 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.620 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.620 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:10:29.620 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:10:29.620 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.620 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.620 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.620 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:10:29.620 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:10:29.620 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.620 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.620 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:29.620 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:10:29.620 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:10:29.620 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.620 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.620 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:29.620 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:10:29.620 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:10:29.620 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.620 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.620 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:29.620 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:29.620 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:29.620 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.620 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.620 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 
ms:8 lbads:9 rp:0 ]] 00:10:29.620 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:29.620 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:29.620 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.620 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.620 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:29.620 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:29.620 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:29.620 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.620 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.620 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:29.620 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:29.620 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:29.620 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.620 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.620 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:29.620 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:29.620 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:29.620 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.620 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.620 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:29.620 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:29.620 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:29.620 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.620 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.620 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:29.620 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:29.620 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:29.620 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.620 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.620 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:29.620 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:29.620 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:29.620 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.620 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.620 07:50:31 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:10:29.620 07:50:31 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:10:29.620 07:50:31 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:10:29.620 07:50:31 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:10:29.620 07:50:31 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:10:29.620 07:50:31 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:10:29.620 07:50:31 nvme_scc -- 
nvme/functions.sh@18 -- # shift 00:10:29.620 07:50:31 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:10:29.620 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.620 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.620 07:50:31 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:10:29.620 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:29.620 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.620 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.620 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:29.620 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:10:29.620 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:10:29.620 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.620 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.620 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:10:29.621 07:50:31 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n 0 ]] 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 
00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 
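The trace above repeats one pattern per identify field: nvme-cli output is split at the first ':' by "IFS=: read -r reg val", and each non-empty pair is eval'd into a global associative array such as nvme2n2. Below is a minimal standalone sketch of that loop, assuming nvme-cli's human-readable "field : value" output; the function name nvme_get_sketch and the exact whitespace trimming are illustrative guesses, not lifted from nvme/functions.sh.

    #!/usr/bin/env bash
    # Illustrative re-creation of the IFS=: / read -r reg val / eval loop
    # that the trace repeats for every identify field. Assumes nvme-cli
    # prints "field : value" lines; the trimming below is an assumption.
    nvme_get_sketch() {
        local ref=$1 subcmd=$2 dev=$3 reg val
        local -gA "$ref=()"                 # e.g. declares global nvme2n2=()
        while IFS=: read -r reg val; do
            reg=${reg//[[:space:]]/}        # "ps 0" -> "ps0", strip padding
            val=${val# }                    # drop the space after the colon
            [[ -n $reg && -n $val ]] || continue
            eval "${ref}[\$reg]=\$val"      # mirrors: eval 'nvme2n2[mssrl]="128"'
        done < <(/usr/local/src/nvme-cli/nvme "$subcmd" "$dev")
    }

    nvme_get_sketch nvme2n2 id-ns /dev/nvme2n2
    echo "nsze=${nvme2n2[nsze]} lbaf4=${nvme2n2[lbaf4]}"

Values keep their internal colons (e.g. ps0 becomes 'mp:25.00W operational enlat:16 ...') because read assigns everything after the first separator to the last variable, which is exactly what the assignments in the trace show.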
00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.621 
07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.621 
07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.621 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:10:29.622 07:50:31 
nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:29.622 
07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:10:29.622 07:50:31 nvme_scc -- scripts/common.sh@18 -- # local i 00:10:29.622 07:50:31 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:10:29.622 07:50:31 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:29.622 07:50:31 nvme_scc -- scripts/common.sh@27 -- # return 0 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:10:29.622 07:50:31 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 
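The trace above is functions.sh's nvme_get helper walking the output of "nvme id-ctrl /dev/nvme3" field by field and caching every register in a bash associative array via eval. A minimal standalone sketch of that same read/IFS pattern (simplified, not the exact functions.sh code; it only assumes nvme-cli is installed and that id-ctrl prints "field : value" lines):
  #!/usr/bin/env bash
  # Sketch of the id-ctrl caching loop traced above.
  declare -A ctrl
  while IFS=: read -r reg val; do
    [[ -n $reg && -n $val ]] || continue    # skip blank or banner lines
    reg=${reg//[[:space:]]/}                # keys arrive space-padded
    ctrl[$reg]=${val# }                     # value keeps its raw text
  done < <(nvme id-ctrl /dev/nvme3)
  echo "sn=${ctrl[sn]:-unset} mdts=${ctrl[mdts]:-unset}"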
00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:10:29.622 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
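A quick way to sanity-check the ver=0x10400 cached above: the NVMe version field packs major, minor, and tertiary numbers as MJR[31:16], MNR[15:8], TER[7:0], so this QEMU controller reports NVMe 1.4.0. A one-line decode:
  # Decode the version word logged above (0x10400 -> NVMe 1.4.0).
  ver=0x10400
  printf 'NVMe %d.%d.%d\n' $(( ver >> 16 )) $(( (ver >> 8) & 0xff )) $(( ver & 0xff ))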
00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@23 -- 
# eval 'nvme3[npss]="0"' 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.623 
07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme3[hmminds]="0"' 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
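Note that the mdts=7 recorded earlier for this controller is a power-of-two exponent, not a byte count: the maximum data transfer size is 2^MDTS units of the controller's minimum page size (CAP.MPSMIN). Assuming the usual 4 KiB minimum page for this QEMU device (an assumption; the trace does not print CAP), the limit works out as:
  # mdts is an exponent over the controller's minimum page size.
  mdts=7 page=4096
  echo "max transfer: $(( (1 << mdts) * page )) bytes"    # 524288 = 512 KiB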
00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.623 07:50:31 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.623 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.624 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:10:29.624 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:10:29.624 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.624 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.624 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.624 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:10:29.624 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:10:29.624 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.624 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.624 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.624 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:10:29.624 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:10:29.624 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
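The oacs=0x12a captured above is a bitmask of optional admin commands. Decoded against the NVMe base spec bit assignments it advertises Format NVM (bit 1), Namespace Management (bit 3), Directives (bit 5), and Doorbell Buffer Config (bit 8). A small decode sketch (bit names per the spec, abbreviated here):
  # Decode oacs=0x12a into its advertised optional admin commands.
  oacs=0x12a
  names=(security format firmware ns-mgmt self-test directives
         nvme-mi virt-mgmt dbbuf get-lba-status)
  for i in "${!names[@]}"; do
    (( oacs & 1 << i )) && echo "oacs bit $i: ${names[i]}"
  done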
00:10:29.624 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.624 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:10:29.624 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:10:29.624 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:10:29.624 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.624 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.624 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.624 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:10:29.624 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:10:29.624 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.624 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.624 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.624 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:10:29.624 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:10:29.624 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.624 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.624 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.624 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:10:29.624 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:10:29.624 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.624 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.624 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.624 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:10:29.624 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:10:29.624 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.624 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.624 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.624 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:10:29.624 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:10:29.624 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.624 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.624 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.624 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:10:29.624 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:10:29.624 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.624 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.624 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:29.624 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:29.624 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:29.624 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.624 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.624 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:29.624 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:29.624 07:50:31 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:29.624 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.624 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.624 07:50:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:29.624 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:10:29.624 07:50:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:10:29.624 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.624 07:50:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.624 07:50:31 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:10:29.624 07:50:31 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:10:29.624 07:50:31 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:10:29.624 07:50:31 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:10:29.624 07:50:31 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:10:29.624 07:50:31 nvme_scc -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:10:29.624 07:50:31 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:10:29.624 07:50:31 nvme_scc -- nvme/functions.sh@204 -- # local _ctrls feature=scc 00:10:29.624 07:50:31 nvme_scc -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:10:29.624 07:50:31 nvme_scc -- nvme/functions.sh@206 -- # get_ctrls_with_feature scc 00:10:29.624 07:50:31 nvme_scc -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:10:29.624 07:50:31 nvme_scc -- nvme/functions.sh@194 -- # local ctrl feature=scc 00:10:29.624 07:50:31 nvme_scc -- nvme/functions.sh@196 -- # type -t ctrl_has_scc 00:10:29.624 07:50:31 nvme_scc -- nvme/functions.sh@196 -- # [[ function == function ]] 00:10:29.624 07:50:31 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:10:29.624 07:50:31 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme1 00:10:29.624 07:50:31 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme1 oncs 00:10:29.624 07:50:31 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme1 00:10:29.624 07:50:31 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme1 00:10:29.624 07:50:31 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme1 oncs 00:10:29.624 07:50:31 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs 00:10:29.624 07:50:31 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:10:29.624 07:50:31 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:10:29.624 07:50:31 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:10:29.624 07:50:31 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:10:29.624 07:50:31 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:10:29.624 07:50:31 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:10:29.624 07:50:31 nvme_scc -- nvme/functions.sh@199 -- # echo nvme1 00:10:29.624 07:50:31 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:10:29.624 07:50:31 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme0 00:10:29.624 07:50:31 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme0 oncs 00:10:29.624 07:50:31 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme0 00:10:29.624 07:50:31 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme0 00:10:29.624 07:50:31 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme0 oncs 00:10:29.624 07:50:31 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:10:29.624 
07:50:31 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:10:29.624 07:50:31 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:10:29.624 07:50:31 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:10:29.624 07:50:31 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:10:29.624 07:50:31 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:10:29.624 07:50:31 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:10:29.624 07:50:31 nvme_scc -- nvme/functions.sh@199 -- # echo nvme0 00:10:29.624 07:50:31 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:10:29.624 07:50:31 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme3 00:10:29.624 07:50:31 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme3 oncs 00:10:29.624 07:50:31 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme3 00:10:29.624 07:50:31 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme3 00:10:29.624 07:50:31 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme3 oncs 00:10:29.624 07:50:31 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=oncs 00:10:29.624 07:50:31 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:10:29.624 07:50:31 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:10:29.624 07:50:31 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:10:29.624 07:50:31 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:10:29.624 07:50:31 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:10:29.624 07:50:31 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:10:29.624 07:50:31 nvme_scc -- nvme/functions.sh@199 -- # echo nvme3 00:10:29.624 07:50:31 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:10:29.624 07:50:31 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme2 00:10:29.624 07:50:31 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme2 oncs 00:10:29.624 07:50:31 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme2 00:10:29.624 07:50:31 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme2 00:10:29.624 07:50:31 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme2 oncs 00:10:29.624 07:50:31 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=oncs 00:10:29.624 07:50:31 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:10:29.624 07:50:31 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:10:29.624 07:50:31 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:10:29.624 07:50:31 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:10:29.624 07:50:31 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:10:29.624 07:50:31 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:10:29.624 07:50:31 nvme_scc -- nvme/functions.sh@199 -- # echo nvme2 00:10:29.624 07:50:31 nvme_scc -- nvme/functions.sh@207 -- # (( 4 > 0 )) 00:10:29.624 07:50:31 nvme_scc -- nvme/functions.sh@208 -- # echo nvme1 00:10:29.624 07:50:31 nvme_scc -- nvme/functions.sh@209 -- # return 0 00:10:29.624 07:50:31 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1 00:10:29.624 07:50:31 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0 00:10:29.624 07:50:31 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:30.190 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:30.757 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:10:30.757 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:10:30.757 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:10:30.757 0000:00:12.0 (1b36 
0010): nvme -> uio_pci_generic 00:10:30.757 07:50:32 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:10:30.757 07:50:32 nvme_scc -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:10:30.757 07:50:32 nvme_scc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:30.757 07:50:32 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:10:31.015 ************************************ 00:10:31.015 START TEST nvme_simple_copy 00:10:31.015 ************************************ 00:10:31.015 07:50:32 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:10:31.273 Initializing NVMe Controllers 00:10:31.273 Attaching to 0000:00:10.0 00:10:31.273 Controller supports SCC. Attached to 0000:00:10.0 00:10:31.273 Namespace ID: 1 size: 6GB 00:10:31.273 Initialization complete. 00:10:31.273 00:10:31.273 Controller QEMU NVMe Ctrl (12340 ) 00:10:31.273 Controller PCI vendor:6966 PCI subsystem vendor:6900 00:10:31.273 Namespace Block Size:4096 00:10:31.273 Writing LBAs 0 to 63 with Random Data 00:10:31.273 Copied LBAs from 0 - 63 to the Destination LBA 256 00:10:31.273 LBAs matching Written Data: 64 00:10:31.273 00:10:31.273 ************************************ 00:10:31.273 END TEST nvme_simple_copy 00:10:31.273 ************************************ 00:10:31.273 real 0m0.350s 00:10:31.273 user 0m0.165s 00:10:31.273 sys 0m0.083s 00:10:31.273 07:50:33 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:31.273 07:50:33 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x 00:10:31.273 ************************************ 00:10:31.273 END TEST nvme_scc 00:10:31.273 ************************************ 00:10:31.273 00:10:31.273 real 0m8.283s 00:10:31.273 user 0m1.492s 00:10:31.273 sys 0m1.655s 00:10:31.273 07:50:33 nvme_scc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:31.273 07:50:33 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:10:31.273 07:50:33 -- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]] 00:10:31.273 07:50:33 -- spdk/autotest.sh@222 -- # [[ 0 -eq 1 ]] 00:10:31.273 07:50:33 -- spdk/autotest.sh@225 -- # [[ '' -eq 1 ]] 00:10:31.273 07:50:33 -- spdk/autotest.sh@228 -- # [[ 1 -eq 1 ]] 00:10:31.273 07:50:33 -- spdk/autotest.sh@229 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh 00:10:31.273 07:50:33 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:31.273 07:50:33 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:31.273 07:50:33 -- common/autotest_common.sh@10 -- # set +x 00:10:31.273 ************************************ 00:10:31.273 START TEST nvme_fdp 00:10:31.273 ************************************ 00:10:31.273 07:50:33 nvme_fdp -- common/autotest_common.sh@1125 -- # test/nvme/nvme_fdp.sh 00:10:31.273 * Looking for test storage... 
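The ctrl_has_scc walk traced above is what routed the simple-copy test to nvme1: every controller reports oncs=0x15d, and bit 8 of ONCS advertises the Simple Copy command, so the first controller in iteration order that passes the check is echoed back. The gate reduces to:
  # The SCC gate applied by ctrl_has_scc: ONCS bit 8 = Simple Copy.
  oncs=0x15d
  if (( oncs & 1 << 8 )); then
    echo "simple copy supported"    # true for 0x15d on all four controllers
  fi
The simple_copy output above then confirms it end to end: 64 LBAs written with random data, copied to destination LBA 256, and all 64 read back matching.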
00:10:31.273 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:31.273 07:50:33 nvme_fdp -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:31.531 07:50:33 nvme_fdp -- common/autotest_common.sh@1681 -- # lcov --version 00:10:31.531 07:50:33 nvme_fdp -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:31.531 07:50:33 nvme_fdp -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:31.531 07:50:33 nvme_fdp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:31.531 07:50:33 nvme_fdp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:31.531 07:50:33 nvme_fdp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:31.531 07:50:33 nvme_fdp -- scripts/common.sh@336 -- # IFS=.-: 00:10:31.531 07:50:33 nvme_fdp -- scripts/common.sh@336 -- # read -ra ver1 00:10:31.531 07:50:33 nvme_fdp -- scripts/common.sh@337 -- # IFS=.-: 00:10:31.532 07:50:33 nvme_fdp -- scripts/common.sh@337 -- # read -ra ver2 00:10:31.532 07:50:33 nvme_fdp -- scripts/common.sh@338 -- # local 'op=<' 00:10:31.532 07:50:33 nvme_fdp -- scripts/common.sh@340 -- # ver1_l=2 00:10:31.532 07:50:33 nvme_fdp -- scripts/common.sh@341 -- # ver2_l=1 00:10:31.532 07:50:33 nvme_fdp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:31.532 07:50:33 nvme_fdp -- scripts/common.sh@344 -- # case "$op" in 00:10:31.532 07:50:33 nvme_fdp -- scripts/common.sh@345 -- # : 1 00:10:31.532 07:50:33 nvme_fdp -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:31.532 07:50:33 nvme_fdp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:31.532 07:50:33 nvme_fdp -- scripts/common.sh@365 -- # decimal 1 00:10:31.532 07:50:33 nvme_fdp -- scripts/common.sh@353 -- # local d=1 00:10:31.532 07:50:33 nvme_fdp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:31.532 07:50:33 nvme_fdp -- scripts/common.sh@355 -- # echo 1 00:10:31.532 07:50:33 nvme_fdp -- scripts/common.sh@365 -- # ver1[v]=1 00:10:31.532 07:50:33 nvme_fdp -- scripts/common.sh@366 -- # decimal 2 00:10:31.532 07:50:33 nvme_fdp -- scripts/common.sh@353 -- # local d=2 00:10:31.532 07:50:33 nvme_fdp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:31.532 07:50:33 nvme_fdp -- scripts/common.sh@355 -- # echo 2 00:10:31.532 07:50:33 nvme_fdp -- scripts/common.sh@366 -- # ver2[v]=2 00:10:31.532 07:50:33 nvme_fdp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:31.532 07:50:33 nvme_fdp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:31.532 07:50:33 nvme_fdp -- scripts/common.sh@368 -- # return 0 00:10:31.532 07:50:33 nvme_fdp -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:31.532 07:50:33 nvme_fdp -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:31.532 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:31.532 --rc genhtml_branch_coverage=1 00:10:31.532 --rc genhtml_function_coverage=1 00:10:31.532 --rc genhtml_legend=1 00:10:31.532 --rc geninfo_all_blocks=1 00:10:31.532 --rc geninfo_unexecuted_blocks=1 00:10:31.532 00:10:31.532 ' 00:10:31.532 07:50:33 nvme_fdp -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:31.532 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:31.532 --rc genhtml_branch_coverage=1 00:10:31.532 --rc genhtml_function_coverage=1 00:10:31.532 --rc genhtml_legend=1 00:10:31.532 --rc geninfo_all_blocks=1 00:10:31.532 --rc geninfo_unexecuted_blocks=1 00:10:31.532 00:10:31.532 ' 00:10:31.532 07:50:33 nvme_fdp -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 
00:10:31.532 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:31.532 --rc genhtml_branch_coverage=1 00:10:31.532 --rc genhtml_function_coverage=1 00:10:31.532 --rc genhtml_legend=1 00:10:31.532 --rc geninfo_all_blocks=1 00:10:31.532 --rc geninfo_unexecuted_blocks=1 00:10:31.532 00:10:31.532 ' 00:10:31.532 07:50:33 nvme_fdp -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:31.532 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:31.532 --rc genhtml_branch_coverage=1 00:10:31.532 --rc genhtml_function_coverage=1 00:10:31.532 --rc genhtml_legend=1 00:10:31.532 --rc geninfo_all_blocks=1 00:10:31.532 --rc geninfo_unexecuted_blocks=1 00:10:31.532 00:10:31.532 ' 00:10:31.532 07:50:33 nvme_fdp -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:10:31.532 07:50:33 nvme_fdp -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:10:31.532 07:50:33 nvme_fdp -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:10:31.532 07:50:33 nvme_fdp -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:10:31.532 07:50:33 nvme_fdp -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:31.532 07:50:33 nvme_fdp -- scripts/common.sh@15 -- # shopt -s extglob 00:10:31.532 07:50:33 nvme_fdp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:31.532 07:50:33 nvme_fdp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:31.532 07:50:33 nvme_fdp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:31.532 07:50:33 nvme_fdp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:31.532 07:50:33 nvme_fdp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:31.532 07:50:33 nvme_fdp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:31.532 07:50:33 nvme_fdp -- paths/export.sh@5 -- # export PATH 00:10:31.532 07:50:33 nvme_fdp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
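Before nvme_fdp starts scanning, autotest_common probes the installed lcov and runs the cmp_versions walk traced above (1.15 against 2) to decide which coverage flags to export. Condensed, the comparison splits each version string on dots and compares component-wise, treating missing components as zero; an illustrative re-implementation (helper name and exact padding behavior are illustrative, not the scripts/common.sh source):
  # Component-wise version less-than in the spirit of the trace above.
  version_lt() {
    local -a a b; IFS=. read -ra a <<<"$1"; IFS=. read -ra b <<<"$2"
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
      (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
      (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
    done
    return 1    # equal is not less-than
  }
  version_lt 1.15 2 && echo "lcov 1.15 < 2: keep the legacy --rc lcov flags"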
00:10:31.532 07:50:33 nvme_fdp -- nvme/functions.sh@10 -- # ctrls=() 00:10:31.532 07:50:33 nvme_fdp -- nvme/functions.sh@10 -- # declare -A ctrls 00:10:31.532 07:50:33 nvme_fdp -- nvme/functions.sh@11 -- # nvmes=() 00:10:31.532 07:50:33 nvme_fdp -- nvme/functions.sh@11 -- # declare -A nvmes 00:10:31.532 07:50:33 nvme_fdp -- nvme/functions.sh@12 -- # bdfs=() 00:10:31.532 07:50:33 nvme_fdp -- nvme/functions.sh@12 -- # declare -A bdfs 00:10:31.532 07:50:33 nvme_fdp -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:10:31.532 07:50:33 nvme_fdp -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:10:31.532 07:50:33 nvme_fdp -- nvme/functions.sh@14 -- # nvme_name= 00:10:31.532 07:50:33 nvme_fdp -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:31.532 07:50:33 nvme_fdp -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:10:31.790 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:32.048 Waiting for block devices as requested 00:10:32.048 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:10:32.048 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:10:32.306 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:10:32.306 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:10:37.624 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:10:37.624 07:50:39 nvme_fdp -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls 00:10:37.624 07:50:39 nvme_fdp -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:10:37.624 07:50:39 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:37.624 07:50:39 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:10:37.624 07:50:39 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:10:37.624 07:50:39 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:10:37.624 07:50:39 nvme_fdp -- scripts/common.sh@18 -- # local i 00:10:37.624 07:50:39 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:10:37.624 07:50:39 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:37.624 07:50:39 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:10:37.624 07:50:39 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:10:37.624 07:50:39 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:10:37.624 07:50:39 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:10:37.624 07:50:39 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:37.624 07:50:39 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:10:37.624 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.624 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.624 07:50:39 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:10:37.624 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:37.624 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.624 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.624 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:37.624 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:10:37.624 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:10:37.624 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.624 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.624 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 
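scan_nvme_ctrls now repeats the per-controller id-ctrl walk for the fdp run, filling the associative arrays declared at functions.sh@10-13. By the end of the scan this run has produced the controller-to-PCI map below (values are the bdfs[] assignments visible in the trace; the loop is just a sketch of how later helpers consume it):
  # Controller -> PCI address map as populated by this scan.
  declare -A bdfs=( [nvme0]=0000:00:11.0 [nvme1]=0000:00:10.0
                    [nvme2]=0000:00:12.0 [nvme3]=0000:00:13.0 )
  for ctrl in "${!bdfs[@]}"; do
    printf '%s -> %s\n' "$ctrl" "${bdfs[$ctrl]}"
  done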
00:10:37.624 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:10:37.624 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:10:37.624 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.624 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.624 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:10:37.624 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:10:37.624 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:10:37.624 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.624 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.624 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:37.624 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:10:37.624 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:10:37.624 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.624 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.624 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:37.624 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:10:37.624 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:10:37.624 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.624 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.624 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:37.624 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:10:37.624 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:10:37.624 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.624 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.624 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:37.624 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:10:37.624 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:10:37.624 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.624 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.624 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.624 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:10:37.624 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:10:37.624 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.624 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.624 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:37.624 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:10:37.624 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:10:37.624 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.624 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.624 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.624 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:10:37.624 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:10:37.624 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.624 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.624 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:37.624 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:10:37.624 07:50:39 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:10:37.624 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.624 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.624 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.624 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:10:37.624 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:10:37.624 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.624 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.624 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.624 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:10:37.624 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:10:37.624 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.624 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.624 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:37.624 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:10:37.624 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:10:37.624 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.624 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.624 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:10:37.624 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:10:37.624 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:10:37.624 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.624 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.624 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.624 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:10:37.624 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:10:37.624 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.624 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.624 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:37.624 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:10:37.624 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:10:37.624 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.624 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.624 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:37.624 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:37.624 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:10:37.624 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.624 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.624 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.624 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:10:37.624 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:10:37.624 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.624 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.624 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.624 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:10:37.624 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:10:37.624 07:50:39 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.624 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.624 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.624 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:10:37.624 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:10:37.624 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.624 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.624 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.624 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:10:37.624 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:10:37.624 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.624 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.624 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.624 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:10:37.624 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:10:37.624 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.624 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.624 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.624 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:10:37.625 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:10:37.625 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.625 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.625 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:37.625 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:10:37.625 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:10:37.625 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.625 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.625 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:37.625 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:10:37.625 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:10:37.625 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.625 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.625 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:37.625 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:10:37.625 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:10:37.625 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.625 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.625 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:37.625 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:10:37.625 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:10:37.625 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.625 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.625 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:37.625 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:10:37.625 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:10:37.625 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.625 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.625 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
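
The oacs word just captured (0x12a) is a bitmask of optional admin capabilities. Decoding it with bit positions from the NVMe base spec (worth re-checking against the revision you target) shows what this QEMU controller offers, including the Directives support that FDP placement handles ride on:

  # Assumed spec bit positions: 1=Format NVM, 3=Namespace Management,
  # 5=Directives, 8=Doorbell Buffer Config. 0x12a sets exactly these four.
  oacs=0x12a
  for bit in 1 3 5 8; do
      (( oacs & (1 << bit) )) && echo "OACS bit $bit set"
  done
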
00:10:37.625 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:10:37.625 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:10:37.625 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.625 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.625 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.625 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:10:37.625 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:10:37.625 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.625 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.625 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.625 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:10:37.625 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:10:37.625 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.625 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.625 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.625 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:10:37.625 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:10:37.625 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.625 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.625 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:37.625 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:10:37.625 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:10:37.625 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.625 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.625 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:37.625 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:10:37.625 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:10:37.625 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.625 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.625 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.625 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:10:37.625 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:10:37.625 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.625 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.625 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.625 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:10:37.625 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:10:37.625 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.625 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.625 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.625 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:10:37.625 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:10:37.625 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.625 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.625 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.625 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:10:37.625 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:10:37.625 07:50:39 nvme_fdp -- 
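
wctemp and cctemp are reported in kelvin, so the thresholds captured above (343 and 373) correspond to 70 C warning and 100 C critical:

  # Kelvin to Celsius for the two thresholds above.
  echo $(( 343 - 273 ))   # wctemp -> 70
  echo $(( 373 - 273 ))   # cctemp -> 100
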
nvme/functions.sh@21 -- # IFS=: 00:10:37.625 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.625 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.625 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:10:37.625 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:10:37.625 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.625 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.625 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.625 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:10:37.625 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:10:37.625 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.625 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.625 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.625 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:10:37.625 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:10:37.625 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.625 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.625 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.625 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:10:37.625 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:10:37.625 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.625 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.625 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.625 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:10:37.625 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:10:37.625 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.625 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.625 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.625 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:10:37.625 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:10:37.625 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.625 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.625 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.625 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:10:37.625 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:10:37.625 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.625 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.625 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.625 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:10:37.625 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:10:37.625 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.625 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.625 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.625 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:10:37.625 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:10:37.625 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.625 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.625 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.625 07:50:39 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:10:37.625 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:10:37.625 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.625 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.625 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.625 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:10:37.625 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:10:37.625 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.625 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.625 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.625 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:10:37.625 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:10:37.625 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.625 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.625 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.625 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:10:37.625 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:10:37.625 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.625 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.625 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.625 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:10:37.625 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:10:37.625 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.625 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.625 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.625 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:10:37.625 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:10:37.625 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.625 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.625 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.625 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:10:37.625 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:10:37.625 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.625 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.625 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.625 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:10:37.625 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:10:37.625 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.625 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.625 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.625 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:10:37.626 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:10:37.626 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.626 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.626 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.626 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:10:37.626 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:10:37.626 07:50:39 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:10:37.626 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.626 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.626 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:10:37.626 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:10:37.626 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.626 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.626 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.626 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:10:37.626 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:10:37.626 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.626 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.626 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:37.626 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:10:37.626 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:10:37.626 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.626 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.626 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:37.626 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:10:37.626 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:10:37.626 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.626 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.626 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.626 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:10:37.626 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:10:37.626 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.626 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.626 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:37.626 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:10:37.626 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:10:37.626 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.626 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.626 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:37.626 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:10:37.626 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:10:37.626 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.626 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.626 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.626 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:10:37.626 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:10:37.626 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.626 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.626 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.626 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:10:37.626 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:10:37.626 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.626 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.626 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 
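
Two more packed fields from this stretch: sqes/cqes carry log2 entry sizes in their nibbles (low nibble required, high nibble maximum), and oncs=0x15d is the optional NVM command set bitmask. The Copy bit being set is consistent with the nonzero mssrl/mcl/msrc the namespace reports further down. A quick decode, bit positions per the NVMe base spec:

  sqes=0x66 cqes=0x44 oncs=0x15d
  echo $(( 1 << (sqes & 0xf) ))    # required SQ entry size: 64 bytes
  echo $(( 1 << (cqes & 0xf) ))    # required CQ entry size: 16 bytes
  (( oncs & (1 << 2) )) && echo "ONCS: Dataset Management supported"
  (( oncs & (1 << 8) )) && echo "ONCS: Copy supported"
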
0x7 ]] 00:10:37.626 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:10:37.626 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:10:37.626 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.626 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.626 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.626 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:10:37.626 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:10:37.626 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.626 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.626 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.626 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:10:37.626 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:10:37.626 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.626 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.626 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.626 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:10:37.626 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:10:37.626 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.626 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.626 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.626 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:10:37.626 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:10:37.626 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.626 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.626 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.626 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:10:37.626 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:10:37.626 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.626 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.626 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:37.626 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:10:37.626 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:10:37.626 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.626 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.626 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:37.626 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:10:37.626 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:10:37.626 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.626 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.626 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.626 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:10:37.626 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:10:37.626 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.626 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.626 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.626 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:10:37.626 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:10:37.626 07:50:39 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:10:37.626 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.626 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.626 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:10:37.626 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:10:37.626 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.626 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.626 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:10:37.626 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:10:37.626 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:10:37.626 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.626 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.626 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.626 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:10:37.626 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:10:37.626 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.626 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.626 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.626 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:10:37.626 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:10:37.626 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.626 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.626 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.626 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:10:37.626 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:10:37.626 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.626 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.626 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.626 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:10:37.626 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:10:37.626 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.626 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.626 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.626 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:10:37.626 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:10:37.626 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.626 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.626 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.626 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:10:37.626 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:10:37.626 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.626 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.626 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:37.626 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:37.626 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:37.626 
07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.626 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.626 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:37.626 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:37.626 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:37.626 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.626 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.626 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:37.626 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:10:37.626 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:10:37.626 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.626 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.626 07:50:39 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:10:37.626 07:50:39 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:10:37.626 07:50:39 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:10:37.626 07:50:39 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:10:37.627 07:50:39 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:10:37.627 07:50:39 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:10:37.627 07:50:39 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:37.627 07:50:39 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:10:37.627 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.627 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.627 07:50:39 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:10:37.627 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:37.627 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.627 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.627 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:37.627 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:10:37.627 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:10:37.627 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.627 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.627 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:37.627 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:10:37.627 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:10:37.627 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.627 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.627 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:37.627 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:10:37.627 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:10:37.627 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.627 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.627 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:37.627 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:10:37.627 07:50:39 nvme_fdp -- 
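
At functions.sh@54 the script moves from the controller to its namespaces: the glob "$ctrl/${ctrl##*/}n"* expands to sysfs entries like /sys/class/nvme/nvme0/nvme0n1, and each one gets the same nvme_get treatment with id-ns instead of id-ctrl. Roughly, with the paths this log shows:

  # Sketch of the namespace walk traced above.
  ctrl=/sys/class/nvme/nvme0
  for ns in "$ctrl/${ctrl##*/}n"*; do
      [[ -e $ns ]] || continue
      ns_dev=${ns##*/}                              # -> nvme0n1
      nvme id-ns "/dev/$ns_dev" | grep -E '^(nsze|ncap|nuse)'
  done
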
nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:10:37.627 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.627 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.627 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:37.627 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:10:37.627 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:10:37.627 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.627 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.627 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:37.627 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:10:37.627 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:10:37.627 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.627 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.627 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:37.627 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:10:37.627 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:10:37.627 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.627 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.627 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:37.627 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:10:37.627 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:10:37.627 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.627 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.627 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.627 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:10:37.627 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:10:37.627 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.627 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.627 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.627 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:10:37.627 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:10:37.627 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.627 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.627 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.627 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:10:37.627 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:10:37.627 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.627 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.627 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.627 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:10:37.627 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:10:37.627 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.627 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.627 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:37.627 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:10:37.627 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:10:37.627 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.627 07:50:39 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:37.627 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.627 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:10:37.627 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:10:37.627 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.627 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.627 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.627 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:10:37.627 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:10:37.627 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.627 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.627 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.627 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:10:37.627 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:10:37.627 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.627 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.627 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.627 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:10:37.627 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:10:37.627 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.627 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.627 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.627 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:10:37.627 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:10:37.627 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.627 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.627 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.627 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:10:37.627 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:10:37.627 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.627 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.627 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.627 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:10:37.627 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:10:37.627 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.627 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.627 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.627 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:10:37.627 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:10:37.627 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.627 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.627 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.627 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:10:37.627 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:10:37.627 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.627 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.627 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.627 07:50:39 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme0n1[npwa]="0"' 00:10:37.627 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:10:37.627 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.627 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.627 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.627 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:10:37.627 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:10:37.627 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.627 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.627 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.627 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:10:37.627 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:10:37.627 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.627 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.627 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.627 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:10:37.627 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:10:37.627 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.627 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.627 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:37.627 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:10:37.627 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:10:37.627 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.627 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.627 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:37.627 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:10:37.627 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:10:37.627 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.627 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.627 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:37.627 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:10:37.627 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:10:37.627 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.627 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.627 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.627 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:10:37.627 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:10:37.627 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.627 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.627 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.627 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:10:37.627 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:10:37.627 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.627 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.628 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.628 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:10:37.628 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:10:37.628 07:50:39 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:10:37.628 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.628 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.628 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:10:37.628 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:10:37.628 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.628 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.628 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.628 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:10:37.628 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:10:37.628 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.628 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.628 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:37.628 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:10:37.628 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:10:37.628 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.628 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.628 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:37.628 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:10:37.628 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:10:37.628 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.628 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.628 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:37.628 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:37.628 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:37.628 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.628 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.628 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:37.628 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:37.628 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:37.628 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.628 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.628 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:37.628 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:37.628 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:37.628 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.628 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.628 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:37.628 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:37.628 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:37.628 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.628 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.628 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 
lbads:12 rp:0 (in use) ]] 00:10:37.628 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:37.628 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:37.628 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.628 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.628 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:37.628 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:37.628 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:37.628 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.628 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.628 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:37.628 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:37.628 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:37.628 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.628 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.628 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:37.628 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:37.628 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:37.628 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.628 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.628 07:50:39 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:10:37.628 07:50:39 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:10:37.628 07:50:39 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:10:37.628 07:50:39 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:10:37.628 07:50:39 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:10:37.628 07:50:39 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:37.628 07:50:39 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:10:37.628 07:50:39 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:10:37.628 07:50:39 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:10:37.628 07:50:39 nvme_fdp -- scripts/common.sh@18 -- # local i 00:10:37.628 07:50:39 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:10:37.628 07:50:39 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:37.628 07:50:39 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:10:37.628 07:50:39 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:10:37.628 07:50:39 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:10:37.628 07:50:39 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:10:37.628 07:50:39 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:37.628 07:50:39 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:10:37.628 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.628 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.628 07:50:39 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:10:37.628 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:37.628 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # 
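
The lbaf0..lbaf7 lines above are the eight LBA formats nvme0n1 advertises, and flbas=0x4 (its low bits) selects lbaf4, the entry the dump marks "(in use)": no metadata, lbads=12. Together with nsze=0x140000 that fixes the namespace geometry:

  # Values as captured above; lbads is a power-of-two block size.
  flbas=0x4 lbads=12 nsze=0x140000
  echo $(( flbas & 0xf ))            # -> 4, so lbaf4 is in use
  echo $(( 1 << lbads ))             # -> 4096-byte blocks
  echo $(( nsze * (1 << lbads) ))    # -> 5368709120 bytes, i.e. 5 GiB
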
IFS=: 00:10:37.628 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.628 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:37.628 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:10:37.628 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:10:37.628 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.628 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.628 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:37.628 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:10:37.628 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:10:37.628 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.628 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.628 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:10:37.628 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:10:37.628 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:10:37.628 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.628 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.628 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:37.628 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:10:37.628 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:10:37.628 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.628 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.628 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:37.628 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:10:37.628 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:10:37.628 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.628 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.628 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:37.628 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:10:37.628 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:10:37.628 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.628 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.628 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:37.628 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:10:37.628 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:10:37.628 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.628 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.628 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.628 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:10:37.628 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:10:37.628 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.628 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.628 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:37.628 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:10:37.628 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:10:37.629 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.629 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.629 07:50:39 nvme_fdp -- 
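
Between the two id-ctrl dumps the outer loop (functions.sh@47) advanced to /sys/class/nvme/nvme1 at PCI 0000:00:10.0, and scripts/common.sh's pci_can_use waved it through because both PCI filter lists are empty in this run; that is what the bare "[[ =~ 0000:00:10.0 ]]" and "[[ -z '' ]]" in the trace mean. A sketch of that gate, with PCI_ALLOWED/PCI_BLOCKED as assumed names for the two lists:

  : "${PCI_ALLOWED:=}" "${PCI_BLOCKED:=}"   # empty by default, as in this run
  pci_can_use() {
      local pci=$1
      # An empty list means no filtering, mirroring the trace above.
      [[ -n $PCI_ALLOWED && ! $PCI_ALLOWED =~ $pci ]] && return 1
      [[ -n $PCI_BLOCKED && $PCI_BLOCKED =~ $pci ]] && return 1
      return 0
  }
  pci_can_use 0000:00:10.0 && echo "nvme1 may be used"
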
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.629 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:10:37.629 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:10:37.629 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.629 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.629 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:37.629 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:10:37.629 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:10:37.629 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.629 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.629 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.629 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:10:37.629 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:10:37.629 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.629 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.629 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.629 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:10:37.629 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:10:37.629 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.629 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.629 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:37.629 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:10:37.629 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:10:37.629 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.629 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.629 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:10:37.629 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:10:37.629 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:10:37.629 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.629 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.629 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.629 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:10:37.629 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:10:37.629 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.629 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.629 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:37.629 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:10:37.629 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:10:37.629 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.629 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.629 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:37.629 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:37.629 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:10:37.629 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.629 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.629 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.629 
00:10:37.629 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1 id-ctrl register values:
00:10:37.629     crdt1=0 crdt2=0 crdt3=0 nvmsr=0 vwci=0 mec=0 oacs=0x12a acl=3 aerl=3 frmw=0x3 lpa=0x7 elpe=0 npss=0 avscc=0 apsta=0
00:10:37.629     wctemp=343 cctemp=373 mtfa=0 hmpre=0 hmmin=0 tnvmcap=0 unvmcap=0 rpmbs=0 edstt=0 dsto=0 fwug=0 kas=0 hctma=0 mntmt=0
00:10:37.630     mxtmt=0 sanicap=0 hmminds=0 hmmaxd=0 nsetidmax=0 endgidmax=0 anatt=0 anacap=0 anagrpmax=0 nanagrpid=0 pels=0 domainid=0
00:10:37.630     megcap=0 sqes=0x66 cqes=0x44 maxcmd=0 nn=256 oncs=0x15d fuses=0 fna=0 vwc=0x7 awun=0 awupf=0 icsvscc=0 nwpc=0 acwu=0
00:10:37.630     ocfs=0x3 sgls=0x1 mnan=0 maxdna=0 maxcna=0 subnqn=nqn.2019-08.org.qemu:12340 ioccsz=0 iorcsz=0 icdoff=0 fcatt=0 msdbd=0
00:10:37.631     ofcs=0 ps0='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' rwt='0 rwl:0 idle_power:- active_power:-' active_power_workload=-
00:10:37.631 07:50:39 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns
00:10:37.631 07:50:39 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"*
00:10:37.631 07:50:39 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]]
00:10:37.631 07:50:39 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme1n1
00:10:37.631 07:50:39 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1
00:10:37.631 07:50:39 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val
00:10:37.631 07:50:39 nvme_fdp -- nvme/functions.sh@18 -- # shift
00:10:37.631 07:50:39 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()'
00:10:37.631 07:50:39 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1
00:10:37.631 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1 id-ns register values:
00:10:37.631     nsze=0x17a17a ncap=0x17a17a nuse=0x17a17a nsfeat=0x14 nlbaf=7 flbas=0x7 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0
00:10:37.632     dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0
00:10:37.632     mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0
00:10:37.632     nguid=00000000000000000000000000000000 eui64=0000000000000000
00:10:37.632     lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0'
00:10:37.632     lbaf4='ms:0 lbads:12 rp:0' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0 (in use)'
00:10:37.632 07:50:39 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1
00:10:37.632 07:50:39 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1
00:10:37.632 07:50:39 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns
00:10:37.632 07:50:39 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0
00:10:37.632 07:50:39 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1
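The trace above repeats one eval per register. A minimal bash sketch of the nvme_get() helper it is tracing, inferred only from the functions.sh line references visible in this log (@16-@23): the NVME_CMD variable name is an assumption (the log shows the binary at /usr/local/src/nvme-cli/nvme), and the real script's quoting and key sanitization may differ.

# Sketch of nvme/functions.sh::nvme_get as inferred from the trace; not a
# verbatim copy of the SPDK script.
nvme_get() {
    local ref=$1 reg val                # @17: name of the target assoc array
    shift                               # @18: rest is the nvme-cli invocation
    local -gA "$ref=()"                 # @20: declares e.g. nvme1n1=() globally

    # @16 runs e.g. "$NVME_CMD id-ns /dev/nvme1n1"; @21-@23 split each
    # "reg : val" output line on ':' and keep non-empty values, producing
    # assignments like nvme1n1[nsze]=0x17a17a seen in the log. Header lines
    # with no value ([[ -n '' ]]) are skipped, as in the trace.
    while IFS=: read -r reg val; do
        [[ -n $val ]] && eval "$ref[${reg//[[:space:]]/}]=\"${val# }\""
    done < <("${NVME_CMD:-nvme}" "$@")  # NVME_CMD is an assumed name
}

With this in place, a call such as nvme_get nvme1 id-ctrl /dev/nvme1 leaves the values queryable as ${nvme1[oacs]}, ${nvme1[subnqn]}, and so on, which is presumably how the rest of functions.sh consumes them.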
07:50:39 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:37.633 07:50:39 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:10:37.633 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.633 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.633 07:50:39 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:10:37.633 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:37.633 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.633 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.633 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:37.633 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:10:37.633 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:10:37.633 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.633 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.633 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:37.633 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:10:37.633 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:10:37.633 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.633 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.633 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:10:37.633 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:10:37.633 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:10:37.633 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.633 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.633 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:37.633 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:10:37.633 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:10:37.633 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.633 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.633 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:37.633 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:10:37.633 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:10:37.633 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.633 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.633 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:37.633 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:10:37.633 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:10:37.633 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.633 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.633 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:37.633 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:10:37.633 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:10:37.633 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.633 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.633 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.633 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:10:37.633 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:10:37.633 07:50:39 nvme_fdp 
-- nvme/functions.sh@21 -- # IFS=: 00:10:37.633 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.633 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:37.633 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:10:37.633 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:10:37.633 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.633 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.633 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.633 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:10:37.633 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:10:37.633 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.633 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.633 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:37.633 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:10:37.633 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:10:37.633 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.633 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.633 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.633 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:10:37.633 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:10:37.633 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.633 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.633 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.633 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:10:37.633 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:10:37.633 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.633 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.633 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:37.633 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:10:37.633 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:10:37.633 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.633 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.633 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:10:37.633 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:10:37.633 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:10:37.633 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.633 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.633 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.633 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:10:37.633 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:10:37.633 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.633 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.633 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:37.633 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:10:37.633 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:10:37.633 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.633 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.633 07:50:39 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:37.633 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:37.633 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:10:37.633 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.633 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.633 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.633 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:10:37.633 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:10:37.633 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.633 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.633 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.633 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:10:37.633 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:10:37.633 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.633 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.633 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.633 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:10:37.633 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:10:37.633 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.633 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.633 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.633 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:10:37.633 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:10:37.633 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.633 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.633 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.633 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:10:37.633 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:10:37.633 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.633 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.633 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.633 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:10:37.633 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:10:37.633 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.633 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.633 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:37.634 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:10:37.634 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:10:37.634 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.634 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.634 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:37.634 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:10:37.634 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:10:37.634 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.634 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.634 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:37.634 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2[aerl]="3"' 00:10:37.634 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:10:37.634 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.634 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.634 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:37.634 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:10:37.634 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:10:37.634 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.634 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.634 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:37.634 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:10:37.634 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:10:37.634 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.634 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.634 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.634 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:10:37.634 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:10:37.634 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.634 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.634 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.634 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:10:37.634 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:10:37.634 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.634 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.634 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.634 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:10:37.634 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:10:37.634 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.634 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.634 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.634 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:10:37.634 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:10:37.634 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.634 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.634 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:37.634 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:10:37.634 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:10:37.634 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.634 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.634 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:37.634 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:10:37.634 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:10:37.634 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.634 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.634 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.634 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:10:37.634 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:10:37.634 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.634 07:50:39 nvme_fdp 
-- nvme/functions.sh@21 -- # read -r reg val 00:10:37.634 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.634 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:10:37.634 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:10:37.634 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.634 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.634 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.634 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:10:37.634 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:10:37.634 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.634 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.634 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.634 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:10:37.634 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:10:37.634 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.634 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.634 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.634 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:10:37.634 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:10:37.634 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.634 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.634 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.634 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:10:37.634 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:10:37.634 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.634 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.634 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.634 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:10:37.634 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:10:37.634 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.634 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.634 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.634 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:10:37.634 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:10:37.634 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.634 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.634 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.634 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:10:37.634 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:10:37.634 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.634 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.634 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.634 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:10:37.634 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:10:37.634 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.634 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.634 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.634 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 
00:10:37.634 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:10:37.634 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.634 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.634 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.634 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:10:37.634 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:10:37.634 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.634 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.634 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.634 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:10:37.634 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:10:37.634 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.634 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.634 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.634 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:10:37.634 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:10:37.634 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.634 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.634 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.634 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:10:37.634 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:10:37.634 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.634 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.634 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.634 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:10:37.634 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:10:37.634 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.634 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.634 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.634 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:10:37.634 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:10:37.634 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.634 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.634 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.634 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:10:37.634 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:10:37.634 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.634 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.634 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.634 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:10:37.634 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:10:37.634 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.634 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.634 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.634 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:10:37.634 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:10:37.634 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.634 07:50:39 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:37.634 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.634 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:10:37.634 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:10:37.634 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.634 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.634 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.634 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:10:37.634 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:10:37.635 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.635 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.635 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.635 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:10:37.635 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:10:37.635 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.635 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.635 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.635 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:10:37.635 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:10:37.635 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.635 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.635 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.635 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:10:37.635 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:10:37.635 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.635 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.635 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:37.635 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:10:37.635 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:10:37.635 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.635 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.635 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:37.635 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:10:37.635 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:10:37.635 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.635 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.635 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.635 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:10:37.635 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:10:37.635 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.635 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.635 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:37.635 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:10:37.635 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:10:37.635 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.635 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.635 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:37.635 07:50:39 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:10:37.635 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:10:37.635 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.635 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.635 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.635 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:10:37.635 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:10:37.635 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.635 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.635 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.635 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:10:37.635 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:10:37.635 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.635 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.635 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:37.635 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:10:37.635 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:10:37.635 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.635 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.635 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.635 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:10:37.635 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:10:37.635 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.635 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.635 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.635 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:10:37.635 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:10:37.635 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.635 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.635 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.635 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:10:37.635 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:10:37.635 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.635 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.635 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.635 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:10:37.635 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:10:37.635 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.635 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.635 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.635 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:10:37.635 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:10:37.635 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.635 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.635 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:37.635 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:10:37.635 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:10:37.635 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 
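The loop has just captured nvme2[oncs]=0x15d, the Optional NVM Command Support bitmask, shortly after sqes=0x66 and cqes=0x44 (min/max submission- and completion-queue entry sizes packed as log2 nibbles: 64-byte SQEs, 16-byte CQEs). A sketch decoding ONCS, with bit labels assumed from the NVMe Base specification rather than from this log:

    # Decode nvme2[oncs]=0x15d; bit labels are an assumption, check your
    # spec revision.
    oncs=${nvme2[oncs]:-0x15d}
    names=(compare write_uncorrectable dsm write_zeroes save_select_features
           reservations timestamp verify copy)
    for bit in "${!names[@]}"; do
        (( (oncs >> bit) & 1 )) && echo "ONCS bit $bit: ${names[bit]}"
    done
    # 0x15d -> compare, dsm, write_zeroes, save_select_features,
    #          timestamp, copy
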
00:10:37.635 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.635 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:37.635 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:10:37.635 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:10:37.635 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.635 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.635 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.635 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:10:37.635 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:10:37.635 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.635 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.635 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.635 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:10:37.635 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:10:37.635 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.635 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.635 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.635 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:10:37.635 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:10:37.635 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.635 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.635 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:10:37.635 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:10:37.635 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:10:37.635 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.635 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.635 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.635 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:10:37.635 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:10:37.635 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.635 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.635 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.635 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:10:37.635 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:10:37.635 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.635 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.635 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.635 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:10:37.635 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:10:37.635 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.635 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.635 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.635 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:10:37.635 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:10:37.635 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.635 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.635 07:50:39 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.635 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:10:37.635 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:10:37.635 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.635 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.635 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.635 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:10:37.635 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:10:37.635 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.635 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.635 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:37.635 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:37.635 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:37.635 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.635 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.635 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:37.635 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:37.635 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:37.635 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.635 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.635 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:37.635 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:10:37.635 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:10:37.635 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.635 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.635 07:50:39 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:10:37.635 07:50:39 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:10:37.636 07:50:39 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:10:37.636 07:50:39 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:10:37.636 07:50:39 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:10:37.636 07:50:39 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:10:37.636 07:50:39 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:37.636 07:50:39 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:10:37.636 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.636 07:50:39 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:10:37.636 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.636 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:37.636 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.636 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.636 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:37.636 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:10:37.636 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:10:37.636 07:50:39 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:10:37.636 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.636 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:37.636 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:10:37.636 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:10:37.636 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.636 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.636 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:37.636 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:10:37.636 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:10:37.636 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.636 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.636 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:37.636 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:10:37.636 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:10:37.636 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.636 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.636 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:37.636 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:10:37.636 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:10:37.636 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.636 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.636 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:37.636 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:10:37.636 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:10:37.636 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.636 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.636 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:37.636 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:10:37.636 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:10:37.636 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.636 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.636 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:37.636 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:10:37.636 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:10:37.636 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.636 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.636 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.636 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:10:37.636 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:10:37.636 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.636 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.636 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.636 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:10:37.636 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:10:37.636 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.636 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
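The trace has moved on to `nvme id-ns /dev/nvme2n1`: nsze/ncap/nuse are 0x100000 blocks, nlbaf=7 says eight LBA formats follow, and the low nibble of flbas=0x4 selects which one is in use (lbaf4, reported later in this pass as "ms:0 lbads:12 rp:0 (in use)"). A sketch of how those fields combine into a block size and capacity, reading the same arrays nvme_get fills in:

    # Resolve nvme2n1's in-use LBA format from the fields traced here
    # (flbas=0x4, nsze=0x100000, lbaf4="ms:0 lbads:12 rp:0 (in use)").
    fmt=$(( nvme2n1[flbas] & 0xf ))              # low nibble -> index 4
    lbaf=${nvme2n1[lbaf$fmt]}                    # "ms:0 lbads:12 rp:0 (in use)"
    lbads=${lbaf##*lbads:}; lbads=${lbads%% *}   # -> 12
    echo "block size: $(( 1 << lbads )) bytes"             # 2^12 = 4096
    echo "capacity:   $(( nvme2n1[nsze] << lbads )) bytes" # 4 GiB
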
00:10:37.636 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.636 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:10:37.636 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:10:37.636 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.636 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.636 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.636 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:10:37.636 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:10:37.636 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.636 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.636 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:37.636 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:10:37.636 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:10:37.636 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.636 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.636 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.636 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:10:37.636 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:10:37.636 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.636 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.636 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.636 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:10:37.636 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:10:37.636 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.636 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.636 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.636 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:10:37.636 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:10:37.636 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.636 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.636 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.636 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:10:37.636 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:10:37.636 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.636 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.636 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.636 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:10:37.636 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:10:37.636 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.636 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.636 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.636 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:10:37.636 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:10:37.636 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.636 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.636 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.636 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 
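nvme2n1[dlfeat]=1 was recorded just above; DLFEAT describes deallocated-block behavior. Assuming the Base spec's encoding (bits 2:0 = read value, bit 3 = Write Zeroes may deallocate), value 1 means deallocated blocks read back as zeros:

    # Interpret nvme2n1[dlfeat]=1 (encoding assumed from the NVMe Base spec).
    case $(( nvme2n1[dlfeat] & 0x7 )) in
        0) echo "deallocated read value not reported" ;;
        1) echo "deallocated blocks read as all zeros" ;;  # this namespace
        2) echo "deallocated blocks read as all ones" ;;
        *) echo "reserved encoding" ;;
    esac
    (( nvme2n1[dlfeat] & 0x8 )) && echo "Write Zeroes may deallocate"
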
00:10:37.636 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:10:37.636 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.636 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.636 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.636 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:10:37.636 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:10:37.636 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.636 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.636 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.636 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:10:37.636 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:10:37.636 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.636 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.636 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.636 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:10:37.636 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:10:37.636 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.636 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.636 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.636 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:10:37.636 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:10:37.636 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.636 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.636 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.636 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:10:37.636 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:10:37.636 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.636 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.636 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.636 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:10:37.636 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:10:37.636 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.636 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.636 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:37.636 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:10:37.636 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:10:37.636 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.636 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.636 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:37.636 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:10:37.636 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:10:37.636 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.636 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.636 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:37.636 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:10:37.636 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:10:37.636 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.636 07:50:39 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.636 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.636 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:10:37.637 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:10:37.637 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.637 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.637 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.637 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:10:37.637 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:10:37.637 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.637 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.637 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.637 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:10:37.637 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:10:37.637 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.637 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.637 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.637 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:10:37.637 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:10:37.637 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.637 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.637 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.637 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:10:37.637 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:10:37.637 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.637 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.637 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:37.637 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:10:37.637 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:10:37.637 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.637 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.637 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:37.637 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:10:37.637 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:10:37.637 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.637 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.637 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:37.637 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:37.637 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:37.637 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.637 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.637 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:37.637 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:37.637 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 
' 00:10:37.637 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.637 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.637 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:37.637 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:37.637 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:37.637 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.637 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.637 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:37.637 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:37.637 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:37.637 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.637 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.637 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:37.637 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:37.637 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:37.637 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.637 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.637 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:37.637 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:37.637 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:37.637 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.637 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.637 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:37.637 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:37.637 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:37.637 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.637 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.637 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:37.637 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:37.637 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:37.637 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.637 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.637 07:50:39 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:10:37.637 07:50:39 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:10:37.637 07:50:39 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:10:37.637 07:50:39 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:10:37.637 07:50:39 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:10:37.637 07:50:39 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:10:37.637 07:50:39 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:37.637 07:50:39 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:10:37.637 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.637 07:50:39 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:37.637 07:50:39 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:10:37.637 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:37.637 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.637 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.637 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:37.637 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:10:37.637 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:10:37.637 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.637 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.637 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:37.637 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:10:37.637 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:10:37.637 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.637 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.637 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:37.637 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:10:37.637 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:10:37.637 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.637 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.637 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:37.637 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:10:37.637 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:10:37.637 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.637 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.637 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:37.637 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:10:37.637 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:10:37.637 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.637 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.637 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:37.637 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:10:37.637 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:10:37.637 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.637 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.637 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:37.637 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:10:37.637 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:10:37.637 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.637 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.637 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:37.637 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:10:37.637 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:10:37.637 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.637 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.637 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.637 07:50:39 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:10:37.637 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:10:37.637 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.637 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.637 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.637 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:10:37.637 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:10:37.637 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.637 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.637 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.638 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:10:37.638 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:10:37.638 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.638 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.638 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.638 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:10:37.638 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:10:37.638 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.638 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.638 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:37.638 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:10:37.638 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:10:37.638 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.638 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.638 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.638 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:10:37.638 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:10:37.638 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.638 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.638 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.638 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:10:37.638 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:10:37.638 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.638 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.638 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.638 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:10:37.638 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:10:37.638 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.638 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.638 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.638 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:10:37.638 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:10:37.638 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.638 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.638 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.638 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:10:37.638 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:10:37.638 07:50:39 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:10:37.638 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.638 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.638 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:10:37.638 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:10:37.638 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.638 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.638 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.638 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:10:37.638 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:10:37.638 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.638 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.638 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.638 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:10:37.638 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:10:37.638 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.638 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.638 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.638 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:10:37.638 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:10:37.638 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.638 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.638 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.638 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:10:37.638 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:10:37.638 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.638 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.638 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.638 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:10:37.638 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:10:37.638 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.638 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.638 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.638 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:10:37.638 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:10:37.638 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.638 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.638 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.638 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:10:37.638 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:10:37.638 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.638 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.638 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:37.638 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:10:37.638 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:10:37.638 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.638 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.638 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ 
-n 128 ]] 00:10:37.638 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:10:37.638 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:10:37.638 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.638 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.638 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:37.638 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:10:37.638 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:10:37.638 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.638 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.638 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.638 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:10:37.638 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:10:37.638 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.638 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.638 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.638 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:10:37.638 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:10:37.638 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.638 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.638 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.638 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:10:37.638 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:10:37.638 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.638 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.638 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.638 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:10:37.638 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:10:37.638 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.638 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.638 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.638 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:10:37.638 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:10:37.638 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.638 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.638 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:37.638 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:10:37.638 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:10:37.638 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.638 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.638 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:37.638 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:10:37.638 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:10:37.638 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.638 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.638 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 
lbads:9 rp:0 ]] 00:10:37.638 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:37.638 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:37.638 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.638 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.638 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:37.638 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:37.638 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:37.638 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.638 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.638 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:37.638 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:37.638 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:37.638 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.638 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.638 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:37.638 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:37.638 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:37.638 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.638 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.638 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:37.638 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:37.638 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:37.638 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.638 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.638 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:37.638 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:37.638 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:37.638 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.638 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.638 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:37.639 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:37.639 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:37.639 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.639 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.639 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:37.639 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:37.639 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:37.639 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.639 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.639 07:50:39 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:10:37.639 07:50:39 nvme_fdp -- nvme/functions.sh@54 -- # for 
ns in "$ctrl/${ctrl##*/}n"* 00:10:37.639 07:50:39 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:10:37.639 07:50:39 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:10:37.639 07:50:39 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:10:37.639 07:50:39 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:10:37.639 07:50:39 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:37.639 07:50:39 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:10:37.639 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.639 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.639 07:50:39 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:10:37.639 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:37.639 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.639 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.639 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:37.639 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:10:37.639 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:10:37.639 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.639 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.639 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:37.639 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:10:37.639 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:10:37.639 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.639 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.639 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:37.639 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:10:37.639 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:10:37.639 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.639 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.639 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:37.639 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:10:37.639 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:10:37.639 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.639 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.639 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:37.639 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:10:37.639 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:10:37.639 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.639 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.639 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:37.639 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:10:37.639 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:10:37.639 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.639 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.639 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:37.639 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:10:37.639 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme2n3[mc]=0x3 00:10:37.639 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.639 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.639 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:37.639 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:10:37.639 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:10:37.639 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.639 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.639 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.639 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:10:37.639 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:10:37.639 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.639 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.639 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.639 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:10:37.639 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:10:37.639 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.639 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.639 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.639 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:10:37.639 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:10:37.639 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.639 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.639 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.639 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:10:37.639 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:10:37.639 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.639 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.639 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:37.639 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:10:37.639 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:10:37.639 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.639 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.639 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.639 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:10:37.639 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:10:37.639 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.639 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.639 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.639 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:10:37.639 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:10:37.639 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.639 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.639 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.639 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:10:37.639 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:10:37.639 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.639 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
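Same pass, third namespace: nvme2n3's dpc=0x1f and dps=0 have just been captured. DPC advertises which end-to-end protection settings the namespace could support, DPS which are actually enabled (here: none). A decode sketch, bit labels again assumed from the Base spec:

    # nvme2n3: dpc=0x1f (protection capability) vs dps=0 (nothing enabled).
    dpc=${nvme2n3[dpc]:-0x1f} dps=${nvme2n3[dps]:-0}
    for bit in 0 1 2; do
        (( (dpc >> bit) & 1 )) && echo "PI Type $(( bit + 1 )) supported"
    done
    (( dpc & 0x08 )) && echo "PI allowed in first bytes of metadata"
    (( dpc & 0x10 )) && echo "PI allowed in last bytes of metadata"
    echo "PI type enabled: $(( dps & 0x7 ))"    # 0 -> end-to-end PI off
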
00:10:37.639 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.639 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:10:37.639 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:10:37.639 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.639 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.639 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.639 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:10:37.639 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:10:37.639 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.639 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.639 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.639 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:10:37.639 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:10:37.639 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.639 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.639 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.639 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:10:37.639 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:10:37.639 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.639 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.639 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.639 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:10:37.639 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:10:37.639 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.639 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.639 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.639 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:10:37.639 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:10:37.639 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.639 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.639 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.639 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:10:37.639 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:10:37.639 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.639 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.639 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.639 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:10:37.639 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:10:37.639 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.639 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.639 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.639 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:10:37.639 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:10:37.639 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.639 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.639 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.639 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:10:37.639 
07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:10:37.639 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.639 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.639 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:37.639 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:10:37.639 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:10:37.639 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.639 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.639 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:37.639 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:10:37.639 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:10:37.639 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.639 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.639 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:37.640 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:10:37.640 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:10:37.640 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.640 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.640 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.640 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:10:37.640 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:10:37.640 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.640 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.640 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.640 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:10:37.640 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:10:37.640 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.640 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.640 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.640 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:10:37.640 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:10:37.640 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.640 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.640 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.640 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:10:37.640 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:10:37.640 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.640 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.640 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.640 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:10:37.640 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:10:37.640 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.640 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.640 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:37.640 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:10:37.640 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme2n3[nguid]=00000000000000000000000000000000 00:10:37.640 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.640 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.640 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:37.640 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:10:37.640 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:10:37.640 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.640 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.640 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:37.640 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:37.640 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:37.640 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.640 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.640 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:37.640 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:37.640 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:37.640 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.640 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.640 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:37.640 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:37.640 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:37.640 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.900 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.900 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:37.900 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:37.901 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:37.901 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.901 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.901 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:37.901 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:37.901 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:37.901 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.901 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.901 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:37.901 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:37.901 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:37.901 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.901 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.901 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:37.901 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:37.901 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:37.901 07:50:39 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:10:37.901 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.901 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:37.901 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:37.901 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:37.901 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.901 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.901 07:50:39 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:10:37.901 07:50:39 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:10:37.901 07:50:39 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:10:37.901 07:50:39 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:10:37.901 07:50:39 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:10:37.901 07:50:39 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:37.901 07:50:39 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:10:37.901 07:50:39 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:10:37.901 07:50:39 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:10:37.901 07:50:39 nvme_fdp -- scripts/common.sh@18 -- # local i 00:10:37.901 07:50:39 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:10:37.901 07:50:39 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:37.901 07:50:39 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:10:37.901 07:50:39 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:10:37.901 07:50:39 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:10:37.901 07:50:39 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:10:37.901 07:50:39 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:37.901 07:50:39 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:10:37.901 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.901 07:50:39 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:10:37.901 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.901 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:37.901 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.901 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.901 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:37.901 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:10:37.901 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:10:37.901 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.901 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.901 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:37.901 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:10:37.901 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:10:37.901 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.901 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.901 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:10:37.901 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:10:37.901 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:10:37.901 07:50:39 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:10:37.901 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.901 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:37.901 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:10:37.901 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:10:37.901 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.901 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.901 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:37.901 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:10:37.901 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:10:37.901 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.901 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.901 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:37.901 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:10:37.901 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:10:37.901 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.901 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.901 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:37.901 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:10:37.901 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:10:37.901 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.901 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.901 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:10:37.901 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:10:37.901 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:10:37.901 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.901 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.901 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:37.901 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:10:37.901 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:10:37.901 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.901 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.901 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.901 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:10:37.901 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:10:37.901 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.901 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.901 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:37.901 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:10:37.901 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:10:37.901 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.901 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.901 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.901 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:10:37.901 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:10:37.901 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.901 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.901 07:50:39 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.901 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:10:37.901 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:10:37.901 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.901 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.901 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:37.901 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:10:37.901 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:10:37.901 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.901 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.901 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:10:37.901 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:10:37.901 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:10:37.901 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.901 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.901 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.901 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:10:37.901 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:10:37.901 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.901 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.901 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:37.901 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:10:37.901 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:10:37.901 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.901 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.901 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:37.901 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:37.901 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:10:37.901 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.901 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.901 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.901 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:10:37.901 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:10:37.901 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.901 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.901 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.901 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:10:37.901 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:10:37.901 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.901 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.901 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.901 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:10:37.901 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:10:37.901 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.901 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.901 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.901 
07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:10:37.901 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:10:37.902 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.902 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.902 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.902 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:10:37.902 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:10:37.902 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.902 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.902 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.902 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:10:37.902 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:10:37.902 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.902 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.902 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:37.902 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:10:37.902 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:10:37.902 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.902 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.902 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:37.902 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:10:37.902 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:10:37.902 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.902 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.902 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:37.902 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:10:37.902 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:10:37.902 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.902 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.902 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:37.902 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:10:37.902 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:10:37.902 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.902 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.902 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:37.902 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:10:37.902 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:10:37.902 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.902 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.902 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.902 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:10:37.902 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:10:37.902 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.902 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.902 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.902 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:10:37.902 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:10:37.902 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:10:37.902 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.902 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.902 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:10:37.902 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:10:37.902 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.902 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.902 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.902 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:10:37.902 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:10:37.902 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.902 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.902 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:37.902 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:10:37.902 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:10:37.902 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.902 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.902 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:37.902 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:10:37.902 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:10:37.902 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.902 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.902 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.902 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:10:37.902 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:10:37.902 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.902 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.902 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.902 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:10:37.902 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:10:37.902 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.902 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.902 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.902 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:10:37.902 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:10:37.902 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.902 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.902 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.902 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:10:37.902 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:10:37.902 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.902 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.902 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.902 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:10:37.902 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:10:37.902 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.902 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.902 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.902 07:50:39 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:10:37.902 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:10:37.902 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.902 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.902 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.902 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:10:37.902 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:10:37.902 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.902 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.902 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.902 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:10:37.902 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:10:37.902 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.902 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.902 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.902 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:10:37.902 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:10:37.902 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.902 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.902 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.902 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:10:37.902 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:10:37.902 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.902 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.902 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.902 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:10:37.902 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:10:37.902 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.902 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.902 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.902 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:10:37.902 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:10:37.902 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.902 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.902 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.902 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:10:37.902 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:10:37.902 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.902 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.902 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.902 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:10:37.902 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:10:37.902 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.902 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.902 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.902 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:10:37.902 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:10:37.902 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.902 
07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.902 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.902 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:10:37.902 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:10:37.902 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.902 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.902 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.902 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:10:37.902 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:10:37.902 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.902 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.902 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:37.902 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:10:37.902 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:10:37.902 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.902 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.902 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.902 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:10:37.902 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:10:37.902 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.902 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.902 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.902 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:10:37.903 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:10:37.903 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.903 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.903 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.903 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:10:37.903 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:10:37.903 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.903 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.903 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.903 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:10:37.903 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:10:37.903 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.903 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.903 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.903 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:10:37.903 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:10:37.903 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.903 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.903 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.903 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:10:37.903 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:10:37.903 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.903 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.903 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.903 07:50:39 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:10:37.903 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:10:37.903 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.903 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.903 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:37.903 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:10:37.903 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:10:37.903 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.903 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.903 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:37.903 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:10:37.903 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:10:37.903 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.903 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.903 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.903 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:10:37.903 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:10:37.903 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.903 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.903 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:37.903 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:10:37.903 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:10:37.903 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.903 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.903 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:37.903 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:10:37.903 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:10:37.903 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.903 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.903 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.903 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:10:37.903 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:10:37.903 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.903 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.903 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.903 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:10:37.903 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:10:37.903 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.903 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.903 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:37.903 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:10:37.903 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:10:37.903 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.903 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.903 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.903 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:10:37.903 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:10:37.903 07:50:39 nvme_fdp -- nvme/functions.sh@21 
-- # IFS=: 00:10:37.903 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.903 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.903 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:10:37.903 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:10:37.903 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.903 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.903 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.903 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:10:37.903 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:10:37.903 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.903 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.903 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.903 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:10:37.903 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:10:37.903 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.903 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.903 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.903 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:10:37.903 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:10:37.903 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.903 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.903 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:37.903 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:10:37.903 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:10:37.903 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.903 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.903 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:37.903 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:10:37.903 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:10:37.903 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.903 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.903 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.903 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:10:37.903 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:10:37.903 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.903 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.903 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.903 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:10:37.903 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:10:37.903 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.903 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.903 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.903 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:10:37.903 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:10:37.903 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.903 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.903 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 
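
The lbafN descriptors parsed earlier for nvme2n3 pack three fields: ms is the per-block metadata size in bytes, lbads is the LBA data size expressed as a power of two, and rp is a relative performance hint. A hedged sketch that decodes one descriptor string; the decode_lbaf helper is invented for illustration and is not part of functions.sh:

# decode_lbaf: pull ms/lbads/rp out of one descriptor and report the
# block size it implies; lbads:12 therefore means 4096-byte blocks.
decode_lbaf() {
  local desc=$1 ms lbads rp
  ms=${desc#*ms:};       ms=${ms%% *}
  lbads=${desc#*lbads:}; lbads=${lbads%% *}
  rp=${desc#*rp:};       rp=${rp%% *}
  echo "block=$((1 << lbads))B metadata=${ms}B rp=${rp}"
}

decode_lbaf 'ms:0 lbads:12 rp:0 (in use)'   # -> block=4096B metadata=0B rp=0
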
00:10:37.903 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:10:37.903 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:10:37.903 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.903 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.903 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.903 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:10:37.903 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:10:37.903 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.903 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.903 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.903 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:10:37.903 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:10:37.903 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.903 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.903 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.903 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:10:37.903 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:10:37.903 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.903 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.903 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.903 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:10:37.903 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:10:37.903 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.903 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.903 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.903 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:10:37.903 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:10:37.903 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.903 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.903 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.903 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:10:37.903 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:10:37.903 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.903 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.903 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:37.903 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:37.903 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:37.903 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.903 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.903 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:37.903 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:37.903 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:37.903 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.903 07:50:39 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:37.903 07:50:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:37.904 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:10:37.904 07:50:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:10:37.904 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.904 07:50:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.904 07:50:39 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:10:37.904 07:50:39 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:10:37.904 07:50:39 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:10:37.904 07:50:39 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:10:37.904 07:50:39 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:10:37.904 07:50:39 nvme_fdp -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:10:37.904 07:50:39 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:10:37.904 07:50:39 nvme_fdp -- nvme/functions.sh@204 -- # local _ctrls feature=fdp 00:10:37.904 07:50:39 nvme_fdp -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:10:37.904 07:50:39 nvme_fdp -- nvme/functions.sh@206 -- # get_ctrls_with_feature fdp 00:10:37.904 07:50:39 nvme_fdp -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:10:37.904 07:50:39 nvme_fdp -- nvme/functions.sh@194 -- # local ctrl feature=fdp 00:10:37.904 07:50:39 nvme_fdp -- nvme/functions.sh@196 -- # type -t ctrl_has_fdp 00:10:37.904 07:50:39 nvme_fdp -- nvme/functions.sh@196 -- # [[ function == function ]] 00:10:37.904 07:50:39 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:10:37.904 07:50:39 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme1 00:10:37.904 07:50:39 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme1 ctratt 00:10:37.904 07:50:39 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme1 00:10:37.904 07:50:39 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme1 00:10:37.904 07:50:39 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme1 ctratt 00:10:37.904 07:50:39 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:10:37.904 07:50:39 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:10:37.904 07:50:39 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:10:37.904 07:50:39 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:10:37.904 07:50:39 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:10:37.904 07:50:39 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:10:37.904 07:50:39 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:10:37.904 07:50:39 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:10:37.904 07:50:39 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme0 00:10:37.904 07:50:39 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme0 ctratt 00:10:37.904 07:50:39 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme0 00:10:37.904 07:50:39 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme0 00:10:37.904 07:50:39 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme0 ctratt 00:10:37.904 07:50:39 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:10:37.904 07:50:39 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:10:37.904 07:50:39 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:10:37.904 07:50:39 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 
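
The scan running through this part of the trace is ctrl_has_fdp: for each parsed controller it resolves the array by name with a bash nameref (the local -n _ctrl lines above) and tests CTRATT bit 19, the Flexible Data Placement capability bit. That is why ctratt=0x88010 (nvme3) qualifies while the controllers reporting 0x8000 do not. A condensed sketch of the check, with the array contents stubbed in rather than parsed:

# nvme3's ctratt value comes from the id-ctrl parse shown earlier.
declare -A nvme3=([ctratt]=0x88010)

ctrl_has_fdp() {
  local -n _ctrl=$1                # nameref, as in functions.sh@73
  local ctratt=${_ctrl[ctratt]:-0}
  (( ctratt & 1 << 19 ))           # exit status 0 only if the FDP bit is set
}

ctrl_has_fdp nvme3 && echo nvme3   # mirrors the "echo nvme3" in the trace
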
00:10:37.904 07:50:39 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:10:37.904 07:50:39 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:10:37.904 07:50:39 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:10:37.904 07:50:39 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:10:37.904 07:50:39 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme3 00:10:37.904 07:50:39 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme3 ctratt 00:10:37.904 07:50:39 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme3 00:10:37.904 07:50:39 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme3 00:10:37.904 07:50:39 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme3 ctratt 00:10:37.904 07:50:39 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:10:37.904 07:50:39 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:10:37.904 07:50:39 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:10:37.904 07:50:39 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:10:37.904 07:50:39 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x88010 00:10:37.904 07:50:39 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x88010 00:10:37.904 07:50:39 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:10:37.904 07:50:39 nvme_fdp -- nvme/functions.sh@199 -- # echo nvme3 00:10:37.904 07:50:39 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:10:37.904 07:50:39 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme2 00:10:37.904 07:50:39 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme2 ctratt 00:10:37.904 07:50:39 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme2 00:10:37.904 07:50:39 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme2 00:10:37.904 07:50:39 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme2 ctratt 00:10:37.904 07:50:39 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:10:37.904 07:50:39 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:10:37.904 07:50:39 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:10:37.904 07:50:39 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:10:37.904 07:50:39 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:10:37.904 07:50:39 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:10:37.904 07:50:39 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:10:37.904 07:50:39 nvme_fdp -- nvme/functions.sh@207 -- # (( 1 > 0 )) 00:10:37.904 07:50:39 nvme_fdp -- nvme/functions.sh@208 -- # echo nvme3 00:10:37.904 07:50:39 nvme_fdp -- nvme/functions.sh@209 -- # return 0 00:10:37.904 07:50:39 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme3 00:10:37.904 07:50:39 nvme_fdp -- nvme/nvme_fdp.sh@14 -- # bdf=0000:00:13.0 00:10:37.904 07:50:39 nvme_fdp -- nvme/nvme_fdp.sh@16 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:38.470 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:39.037 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:10:39.037 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:10:39.037 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:10:39.037 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:10:39.037 07:50:40 nvme_fdp -- nvme/nvme_fdp.sh@18 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:10:39.037 07:50:40 nvme_fdp -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:10:39.037 07:50:40 
nvme_fdp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:39.037 07:50:40 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:10:39.037 ************************************ 00:10:39.037 START TEST nvme_flexible_data_placement 00:10:39.037 ************************************ 00:10:39.037 07:50:40 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:10:39.296 Initializing NVMe Controllers 00:10:39.296 Attaching to 0000:00:13.0 00:10:39.296 Controller supports FDP Attached to 0000:00:13.0 00:10:39.296 Namespace ID: 1 Endurance Group ID: 1 00:10:39.296 Initialization complete. 00:10:39.296 00:10:39.296 ================================== 00:10:39.296 == FDP tests for Namespace: #01 == 00:10:39.296 ================================== 00:10:39.296 00:10:39.296 Get Feature: FDP: 00:10:39.296 ================= 00:10:39.296 Enabled: Yes 00:10:39.296 FDP configuration Index: 0 00:10:39.296 00:10:39.296 FDP configurations log page 00:10:39.296 =========================== 00:10:39.296 Number of FDP configurations: 1 00:10:39.296 Version: 0 00:10:39.296 Size: 112 00:10:39.296 FDP Configuration Descriptor: 0 00:10:39.296 Descriptor Size: 96 00:10:39.296 Reclaim Group Identifier format: 2 00:10:39.296 FDP Volatile Write Cache: Not Present 00:10:39.296 FDP Configuration: Valid 00:10:39.296 Vendor Specific Size: 0 00:10:39.296 Number of Reclaim Groups: 2 00:10:39.296 Number of Reclaim Unit Handles: 8 00:10:39.296 Max Placement Identifiers: 128 00:10:39.296 Number of Namespaces Supported: 256 00:10:39.296 Reclaim Unit Nominal Size: 6000000 bytes 00:10:39.296 Estimated Reclaim Unit Time Limit: Not Reported 00:10:39.296 RUH Desc #000: RUH Type: Initially Isolated 00:10:39.296 RUH Desc #001: RUH Type: Initially Isolated 00:10:39.296 RUH Desc #002: RUH Type: Initially Isolated 00:10:39.296 RUH Desc #003: RUH Type: Initially Isolated 00:10:39.296 RUH Desc #004: RUH Type: Initially Isolated 00:10:39.296 RUH Desc #005: RUH Type: Initially Isolated 00:10:39.296 RUH Desc #006: RUH Type: Initially Isolated 00:10:39.296 RUH Desc #007: RUH Type: Initially Isolated 00:10:39.296 00:10:39.296 FDP reclaim unit handle usage log page 00:10:39.296 ====================================== 00:10:39.296 Number of Reclaim Unit Handles: 8 00:10:39.296 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:10:39.296 RUH Usage Desc #001: RUH Attributes: Unused 00:10:39.296 RUH Usage Desc #002: RUH Attributes: Unused 00:10:39.296 RUH Usage Desc #003: RUH Attributes: Unused 00:10:39.296 RUH Usage Desc #004: RUH Attributes: Unused 00:10:39.296 RUH Usage Desc #005: RUH Attributes: Unused 00:10:39.296 RUH Usage Desc #006: RUH Attributes: Unused 00:10:39.296 RUH Usage Desc #007: RUH Attributes: Unused 00:10:39.296 00:10:39.296 FDP statistics log page 00:10:39.296 ======================= 00:10:39.296 Host bytes with metadata written: 763232256 00:10:39.296 Media bytes with metadata written: 763400192 00:10:39.296 Media bytes erased: 0 00:10:39.296 00:10:39.296 FDP Reclaim unit handle status 00:10:39.296 ============================== 00:10:39.296 Number of RUHS descriptors: 2 00:10:39.296 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000002820 00:10:39.296 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000 00:10:39.296 00:10:39.296 FDP write on placement id: 0 success 00:10:39.296 00:10:39.296 Set Feature: Enabling FDP events on Placement handle: #0
Success 00:10:39.296 00:10:39.296 IO mgmt send: RUH update for Placement ID: #0 Success 00:10:39.296 00:10:39.296 Get Feature: FDP Events for Placement handle: #0 00:10:39.296 ======================== 00:10:39.296 Number of FDP Events: 6 00:10:39.296 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes 00:10:39.296 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes 00:10:39.296 FDP Event: #2 Type: Ctrlr Reset Modified RUHs Enabled: Yes 00:10:39.296 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes 00:10:39.296 FDP Event: #4 Type: Media Reallocated Enabled: No 00:10:39.296 FDP Event: #5 Type: Implicitly modified RUH Enabled: No 00:10:39.296 00:10:39.296 FDP events log page 00:10:39.296 =================== 00:10:39.296 Number of FDP events: 1 00:10:39.296 FDP Event #0: 00:10:39.296 Event Type: RU Not Written to Capacity 00:10:39.296 Placement Identifier: Valid 00:10:39.296 NSID: Valid 00:10:39.296 Location: Valid 00:10:39.296 Placement Identifier: 0 00:10:39.296 Event Timestamp: 7 00:10:39.296 Namespace Identifier: 1 00:10:39.296 Reclaim Group Identifier: 0 00:10:39.296 Reclaim Unit Handle Identifier: 0 00:10:39.296 00:10:39.296 FDP test passed 00:10:39.296 00:10:39.296 real 0m0.295s 00:10:39.296 user 0m0.098s 00:10:39.296 sys 0m0.095s 00:10:39.296 07:50:41 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:39.296 07:50:41 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@10 -- # set +x 00:10:39.296 ************************************ 00:10:39.296 END TEST nvme_flexible_data_placement 00:10:39.296 ************************************ 00:10:39.296 00:10:39.296 real 0m8.037s 00:10:39.296 user 0m1.387s 00:10:39.296 sys 0m1.629s 00:10:39.296 07:50:41 nvme_fdp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:39.296 07:50:41 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:10:39.296 ************************************ 00:10:39.296 END TEST nvme_fdp 00:10:39.296 ************************************ 00:10:39.296 07:50:41 -- spdk/autotest.sh@232 -- # [[ '' -eq 1 ]] 00:10:39.296 07:50:41 -- spdk/autotest.sh@236 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:10:39.296 07:50:41 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:39.296 07:50:41 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:39.296 07:50:41 -- common/autotest_common.sh@10 -- # set +x 00:10:39.296 ************************************ 00:10:39.296 START TEST nvme_rpc 00:10:39.296 ************************************ 00:10:39.296 07:50:41 nvme_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:10:39.556 * Looking for test storage...
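
Before doing anything with NVMe, the nvme_rpc harness probes the installed lcov, and the cmp_versions trace just below walks a plain field-by-field comparison: split both version strings on dots, compare each component numerically, and treat a missing component as zero. A standalone sketch of that idea, assuming purely numeric fields (the real script additionally validates each field with decimal, and spells the helper "lt"):

version_lt() {
  local -a v1 v2
  IFS=.-: read -ra v1 <<< "$1"
  IFS=.-: read -ra v2 <<< "$2"
  local i a b n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
  for (( i = 0; i < n; i++ )); do
    a=${v1[i]:-0} b=${v2[i]:-0}
    (( a < b )) && return 0
    (( a > b )) && return 1
  done
  return 1   # equal versions are not "less than"
}

version_lt 1.15 2 && echo "lcov predates 2.x: keep the legacy --rc options"
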
00:10:39.556 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:39.556 07:50:41 nvme_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:39.556 07:50:41 nvme_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:10:39.556 07:50:41 nvme_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:39.556 07:50:41 nvme_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:39.556 07:50:41 nvme_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:39.556 07:50:41 nvme_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:39.556 07:50:41 nvme_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:39.556 07:50:41 nvme_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:10:39.556 07:50:41 nvme_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:10:39.556 07:50:41 nvme_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:10:39.556 07:50:41 nvme_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:10:39.556 07:50:41 nvme_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:10:39.556 07:50:41 nvme_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:10:39.556 07:50:41 nvme_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:10:39.556 07:50:41 nvme_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:39.556 07:50:41 nvme_rpc -- scripts/common.sh@344 -- # case "$op" in 00:10:39.556 07:50:41 nvme_rpc -- scripts/common.sh@345 -- # : 1 00:10:39.556 07:50:41 nvme_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:39.556 07:50:41 nvme_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:39.556 07:50:41 nvme_rpc -- scripts/common.sh@365 -- # decimal 1 00:10:39.556 07:50:41 nvme_rpc -- scripts/common.sh@353 -- # local d=1 00:10:39.556 07:50:41 nvme_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:39.556 07:50:41 nvme_rpc -- scripts/common.sh@355 -- # echo 1 00:10:39.556 07:50:41 nvme_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:10:39.556 07:50:41 nvme_rpc -- scripts/common.sh@366 -- # decimal 2 00:10:39.556 07:50:41 nvme_rpc -- scripts/common.sh@353 -- # local d=2 00:10:39.556 07:50:41 nvme_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:39.556 07:50:41 nvme_rpc -- scripts/common.sh@355 -- # echo 2 00:10:39.556 07:50:41 nvme_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:10:39.556 07:50:41 nvme_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:39.556 07:50:41 nvme_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:39.556 07:50:41 nvme_rpc -- scripts/common.sh@368 -- # return 0 00:10:39.556 07:50:41 nvme_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:39.556 07:50:41 nvme_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:39.556 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:39.556 --rc genhtml_branch_coverage=1 00:10:39.556 --rc genhtml_function_coverage=1 00:10:39.556 --rc genhtml_legend=1 00:10:39.556 --rc geninfo_all_blocks=1 00:10:39.556 --rc geninfo_unexecuted_blocks=1 00:10:39.556 00:10:39.556 ' 00:10:39.556 07:50:41 nvme_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:39.556 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:39.556 --rc genhtml_branch_coverage=1 00:10:39.556 --rc genhtml_function_coverage=1 00:10:39.556 --rc genhtml_legend=1 00:10:39.556 --rc geninfo_all_blocks=1 00:10:39.556 --rc geninfo_unexecuted_blocks=1 00:10:39.556 00:10:39.556 ' 00:10:39.556 07:50:41 nvme_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 
00:10:39.556 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:39.556 --rc genhtml_branch_coverage=1 00:10:39.556 --rc genhtml_function_coverage=1 00:10:39.556 --rc genhtml_legend=1 00:10:39.556 --rc geninfo_all_blocks=1 00:10:39.556 --rc geninfo_unexecuted_blocks=1 00:10:39.556 00:10:39.556 ' 00:10:39.556 07:50:41 nvme_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:39.556 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:39.556 --rc genhtml_branch_coverage=1 00:10:39.556 --rc genhtml_function_coverage=1 00:10:39.556 --rc genhtml_legend=1 00:10:39.556 --rc geninfo_all_blocks=1 00:10:39.556 --rc geninfo_unexecuted_blocks=1 00:10:39.556 00:10:39.556 ' 00:10:39.556 07:50:41 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:39.556 07:50:41 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:10:39.556 07:50:41 nvme_rpc -- common/autotest_common.sh@1507 -- # bdfs=() 00:10:39.556 07:50:41 nvme_rpc -- common/autotest_common.sh@1507 -- # local bdfs 00:10:39.556 07:50:41 nvme_rpc -- common/autotest_common.sh@1508 -- # bdfs=($(get_nvme_bdfs)) 00:10:39.556 07:50:41 nvme_rpc -- common/autotest_common.sh@1508 -- # get_nvme_bdfs 00:10:39.556 07:50:41 nvme_rpc -- common/autotest_common.sh@1496 -- # bdfs=() 00:10:39.556 07:50:41 nvme_rpc -- common/autotest_common.sh@1496 -- # local bdfs 00:10:39.556 07:50:41 nvme_rpc -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:10:39.556 07:50:41 nvme_rpc -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:10:39.556 07:50:41 nvme_rpc -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:10:39.556 07:50:41 nvme_rpc -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:10:39.556 07:50:41 nvme_rpc -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:10:39.556 07:50:41 nvme_rpc -- common/autotest_common.sh@1510 -- # echo 0000:00:10.0 00:10:39.556 07:50:41 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:10:39.556 07:50:41 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=67674 00:10:39.556 07:50:41 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:10:39.556 07:50:41 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 67674 00:10:39.556 07:50:41 nvme_rpc -- common/autotest_common.sh@831 -- # '[' -z 67674 ']' 00:10:39.556 07:50:41 nvme_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:39.556 07:50:41 nvme_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:39.556 07:50:41 nvme_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:39.556 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:39.556 07:50:41 nvme_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:39.556 07:50:41 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:39.556 07:50:41 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:10:39.815 [2024-10-09 07:50:41.725195] Starting SPDK v25.01-pre git sha1 1c2942c86 / DPDK 24.03.0 initialization... 
00:10:39.815 [2024-10-09 07:50:41.725365] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67674 ] 00:10:40.073 [2024-10-09 07:50:41.893456] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:40.332 [2024-10-09 07:50:42.194467] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:10:40.332 [2024-10-09 07:50:42.194480] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:10:41.266 07:50:42 nvme_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:41.266 07:50:42 nvme_rpc -- common/autotest_common.sh@864 -- # return 0 00:10:41.266 07:50:42 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:10:41.524 Nvme0n1 00:10:41.524 07:50:43 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:10:41.524 07:50:43 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:10:41.782 request: 00:10:41.782 { 00:10:41.782 "bdev_name": "Nvme0n1", 00:10:41.782 "filename": "non_existing_file", 00:10:41.782 "method": "bdev_nvme_apply_firmware", 00:10:41.782 "req_id": 1 00:10:41.782 } 00:10:41.782 Got JSON-RPC error response 00:10:41.782 response: 00:10:41.782 { 00:10:41.782 "code": -32603, 00:10:41.782 "message": "open file failed." 00:10:41.782 } 00:10:41.782 07:50:43 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:10:41.782 07:50:43 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:10:41.782 07:50:43 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:10:42.040 07:50:43 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:10:42.040 07:50:43 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 67674 00:10:42.040 07:50:43 nvme_rpc -- common/autotest_common.sh@950 -- # '[' -z 67674 ']' 00:10:42.040 07:50:43 nvme_rpc -- common/autotest_common.sh@954 -- # kill -0 67674 00:10:42.040 07:50:43 nvme_rpc -- common/autotest_common.sh@955 -- # uname 00:10:42.040 07:50:43 nvme_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:42.040 07:50:43 nvme_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 67674 00:10:42.040 07:50:44 nvme_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:42.040 killing process with pid 67674 00:10:42.040 07:50:44 nvme_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:42.040 07:50:44 nvme_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 67674' 00:10:42.040 07:50:44 nvme_rpc -- common/autotest_common.sh@969 -- # kill 67674 00:10:42.040 07:50:44 nvme_rpc -- common/autotest_common.sh@974 -- # wait 67674 00:10:44.570 00:10:44.570 real 0m4.876s 00:10:44.570 user 0m9.190s 00:10:44.570 sys 0m0.691s 00:10:44.570 07:50:46 nvme_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:44.570 07:50:46 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:44.570 ************************************ 00:10:44.570 END TEST nvme_rpc 00:10:44.570 ************************************ 00:10:44.570 07:50:46 -- spdk/autotest.sh@237 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:10:44.570 07:50:46 -- common/autotest_common.sh@1101 -- # '[' 2 -le 
1 ']' 00:10:44.570 07:50:46 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:44.570 07:50:46 -- common/autotest_common.sh@10 -- # set +x 00:10:44.570 ************************************ 00:10:44.570 START TEST nvme_rpc_timeouts 00:10:44.570 ************************************ 00:10:44.570 07:50:46 nvme_rpc_timeouts -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:10:44.570 * Looking for test storage... 00:10:44.570 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:44.570 07:50:46 nvme_rpc_timeouts -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:44.570 07:50:46 nvme_rpc_timeouts -- common/autotest_common.sh@1681 -- # lcov --version 00:10:44.570 07:50:46 nvme_rpc_timeouts -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:44.570 07:50:46 nvme_rpc_timeouts -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:44.570 07:50:46 nvme_rpc_timeouts -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:44.570 07:50:46 nvme_rpc_timeouts -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:44.570 07:50:46 nvme_rpc_timeouts -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:44.570 07:50:46 nvme_rpc_timeouts -- scripts/common.sh@336 -- # IFS=.-: 00:10:44.570 07:50:46 nvme_rpc_timeouts -- scripts/common.sh@336 -- # read -ra ver1 00:10:44.570 07:50:46 nvme_rpc_timeouts -- scripts/common.sh@337 -- # IFS=.-: 00:10:44.570 07:50:46 nvme_rpc_timeouts -- scripts/common.sh@337 -- # read -ra ver2 00:10:44.570 07:50:46 nvme_rpc_timeouts -- scripts/common.sh@338 -- # local 'op=<' 00:10:44.570 07:50:46 nvme_rpc_timeouts -- scripts/common.sh@340 -- # ver1_l=2 00:10:44.570 07:50:46 nvme_rpc_timeouts -- scripts/common.sh@341 -- # ver2_l=1 00:10:44.570 07:50:46 nvme_rpc_timeouts -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:44.570 07:50:46 nvme_rpc_timeouts -- scripts/common.sh@344 -- # case "$op" in 00:10:44.570 07:50:46 nvme_rpc_timeouts -- scripts/common.sh@345 -- # : 1 00:10:44.570 07:50:46 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:44.570 07:50:46 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:44.570 07:50:46 nvme_rpc_timeouts -- scripts/common.sh@365 -- # decimal 1 00:10:44.570 07:50:46 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=1 00:10:44.570 07:50:46 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:44.570 07:50:46 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 1 00:10:44.570 07:50:46 nvme_rpc_timeouts -- scripts/common.sh@365 -- # ver1[v]=1 00:10:44.570 07:50:46 nvme_rpc_timeouts -- scripts/common.sh@366 -- # decimal 2 00:10:44.570 07:50:46 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=2 00:10:44.570 07:50:46 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:44.570 07:50:46 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 2 00:10:44.570 07:50:46 nvme_rpc_timeouts -- scripts/common.sh@366 -- # ver2[v]=2 00:10:44.570 07:50:46 nvme_rpc_timeouts -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:44.570 07:50:46 nvme_rpc_timeouts -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:44.570 07:50:46 nvme_rpc_timeouts -- scripts/common.sh@368 -- # return 0 00:10:44.570 07:50:46 nvme_rpc_timeouts -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:44.570 07:50:46 nvme_rpc_timeouts -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:44.570 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:44.570 --rc genhtml_branch_coverage=1 00:10:44.570 --rc genhtml_function_coverage=1 00:10:44.570 --rc genhtml_legend=1 00:10:44.570 --rc geninfo_all_blocks=1 00:10:44.570 --rc geninfo_unexecuted_blocks=1 00:10:44.570 00:10:44.570 ' 00:10:44.570 07:50:46 nvme_rpc_timeouts -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:44.570 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:44.570 --rc genhtml_branch_coverage=1 00:10:44.570 --rc genhtml_function_coverage=1 00:10:44.570 --rc genhtml_legend=1 00:10:44.570 --rc geninfo_all_blocks=1 00:10:44.570 --rc geninfo_unexecuted_blocks=1 00:10:44.570 00:10:44.570 ' 00:10:44.570 07:50:46 nvme_rpc_timeouts -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:10:44.570 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:44.570 --rc genhtml_branch_coverage=1 00:10:44.570 --rc genhtml_function_coverage=1 00:10:44.570 --rc genhtml_legend=1 00:10:44.570 --rc geninfo_all_blocks=1 00:10:44.570 --rc geninfo_unexecuted_blocks=1 00:10:44.570 00:10:44.570 ' 00:10:44.570 07:50:46 nvme_rpc_timeouts -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:44.570 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:44.570 --rc genhtml_branch_coverage=1 00:10:44.570 --rc genhtml_function_coverage=1 00:10:44.570 --rc genhtml_legend=1 00:10:44.570 --rc geninfo_all_blocks=1 00:10:44.570 --rc geninfo_unexecuted_blocks=1 00:10:44.570 00:10:44.570 ' 00:10:44.570 07:50:46 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:44.570 07:50:46 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_67756 00:10:44.570 07:50:46 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_67756 00:10:44.570 07:50:46 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=67788 00:10:44.570 07:50:46 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:10:44.570 07:50:46 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 
-- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 00:10:44.570 07:50:46 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 67788 00:10:44.570 07:50:46 nvme_rpc_timeouts -- common/autotest_common.sh@831 -- # '[' -z 67788 ']' 00:10:44.570 07:50:46 nvme_rpc_timeouts -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:44.570 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:44.570 07:50:46 nvme_rpc_timeouts -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:44.570 07:50:46 nvme_rpc_timeouts -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:44.570 07:50:46 nvme_rpc_timeouts -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:44.570 07:50:46 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:10:44.570 [2024-10-09 07:50:46.545355] Starting SPDK v25.01-pre git sha1 1c2942c86 / DPDK 24.03.0 initialization... 00:10:44.570 [2024-10-09 07:50:46.545502] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67788 ] 00:10:44.865 [2024-10-09 07:50:46.706549] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:45.124 [2024-10-09 07:50:46.895415] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:10:45.124 [2024-10-09 07:50:46.895435] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:10:45.690 Checking default timeout settings: 00:10:45.691 07:50:47 nvme_rpc_timeouts -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:45.691 07:50:47 nvme_rpc_timeouts -- common/autotest_common.sh@864 -- # return 0 00:10:45.691 07:50:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:10:45.691 07:50:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:10:46.257 Making settings changes with rpc: 00:10:46.257 07:50:48 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:10:46.257 07:50:48 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:10:46.514 Check default vs. modified settings: 00:10:46.514 07:50:48 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. 
modified settings: 00:10:46.514 07:50:48 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:10:47.080 07:50:48 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:10:47.080 07:50:48 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:10:47.080 07:50:48 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:10:47.080 07:50:48 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:47.080 07:50:48 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_67756 00:10:47.080 07:50:48 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:10:47.080 07:50:48 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_67756 00:10:47.080 07:50:48 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:10:47.080 07:50:48 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:47.080 Setting action_on_timeout is changed as expected. 00:10:47.080 07:50:48 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:10:47.080 07:50:48 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:10:47.080 07:50:48 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 00:10:47.080 07:50:48 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:10:47.080 07:50:48 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_67756 00:10:47.080 07:50:48 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:10:47.080 07:50:48 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:47.080 07:50:48 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:10:47.080 07:50:48 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_67756 00:10:47.080 07:50:48 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:47.080 07:50:48 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:10:47.080 Setting timeout_us is changed as expected. 00:10:47.080 07:50:48 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:10:47.080 07:50:48 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:10:47.081 07:50:48 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 
00:10:47.081 07:50:48 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:10:47.081 07:50:48 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:10:47.081 07:50:48 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_67756 00:10:47.081 07:50:48 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:47.081 07:50:48 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:10:47.081 07:50:48 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_67756 00:10:47.081 07:50:48 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:10:47.081 07:50:48 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:47.081 Setting timeout_admin_us is changed as expected. 00:10:47.081 07:50:48 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:10:47.081 07:50:48 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:10:47.081 07:50:48 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 00:10:47.081 07:50:48 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:10:47.081 07:50:48 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_67756 /tmp/settings_modified_67756 00:10:47.081 07:50:48 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 67788 00:10:47.081 07:50:48 nvme_rpc_timeouts -- common/autotest_common.sh@950 -- # '[' -z 67788 ']' 00:10:47.081 07:50:48 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # kill -0 67788 00:10:47.081 07:50:48 nvme_rpc_timeouts -- common/autotest_common.sh@955 -- # uname 00:10:47.081 07:50:48 nvme_rpc_timeouts -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:47.081 07:50:48 nvme_rpc_timeouts -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 67788 00:10:47.081 killing process with pid 67788 00:10:47.081 07:50:48 nvme_rpc_timeouts -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:47.081 07:50:48 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:47.081 07:50:48 nvme_rpc_timeouts -- common/autotest_common.sh@968 -- # echo 'killing process with pid 67788' 00:10:47.081 07:50:48 nvme_rpc_timeouts -- common/autotest_common.sh@969 -- # kill 67788 00:10:47.081 07:50:48 nvme_rpc_timeouts -- common/autotest_common.sh@974 -- # wait 67788 00:10:49.634 RPC TIMEOUT SETTING TEST PASSED. 00:10:49.634 07:50:51 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 
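The pass/fail logic traced above is just a config diff: the test snapshots the target's JSON configuration with rpc.py save_config before and after bdev_nvme_set_options, then pulls each field out of both snapshots with the same grep/awk/sed pipeline and requires that the value changed. Below is a minimal standalone sketch of that round trip; it assumes an spdk_tgt already listening on the default /var/tmp/spdk.sock, uses placeholder file names instead of the per-pid /tmp/settings_*_67756 pair, and adds -w to grep so "timeout_us" does not also match the "timeout_admin_us" line.

    #!/usr/bin/env bash
    # Sketch of the default-vs-modified timeout check driven by nvme_rpc_timeouts.sh.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    "$rpc" save_config > /tmp/settings_default.json    # snapshot the defaults
    "$rpc" bdev_nvme_set_options \
        --timeout-us=12000000 \
        --timeout-admin-us=24000000 \
        --action-on-timeout=abort                      # apply the modified timeouts
    "$rpc" save_config > /tmp/settings_modified.json   # snapshot again

    for setting in action_on_timeout timeout_us timeout_admin_us; do
        # Same extraction the test traces: second field, stripped to alphanumerics.
        before=$(grep -w "$setting" /tmp/settings_default.json | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
        after=$(grep -w "$setting" /tmp/settings_modified.json | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
        if [ "$before" != "$after" ]; then
            echo "Setting $setting is changed as expected."
        else
            echo "Setting $setting was not modified!" >&2
        fi
    done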
00:10:49.634 00:10:49.634 real 0m4.968s 00:10:49.634 user 0m9.707s 00:10:49.634 sys 0m0.627s 00:10:49.634 ************************************ 00:10:49.634 END TEST nvme_rpc_timeouts 00:10:49.634 ************************************ 00:10:49.634 07:50:51 nvme_rpc_timeouts -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:49.634 07:50:51 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:10:49.634 07:50:51 -- spdk/autotest.sh@239 -- # uname -s 00:10:49.634 07:50:51 -- spdk/autotest.sh@239 -- # '[' Linux = Linux ']' 00:10:49.634 07:50:51 -- spdk/autotest.sh@240 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:10:49.634 07:50:51 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:49.634 07:50:51 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:49.634 07:50:51 -- common/autotest_common.sh@10 -- # set +x 00:10:49.634 ************************************ 00:10:49.634 START TEST sw_hotplug 00:10:49.634 ************************************ 00:10:49.634 07:50:51 sw_hotplug -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:10:49.634 * Looking for test storage... 00:10:49.634 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:49.634 07:50:51 sw_hotplug -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:49.634 07:50:51 sw_hotplug -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:49.634 07:50:51 sw_hotplug -- common/autotest_common.sh@1681 -- # lcov --version 00:10:49.634 07:50:51 sw_hotplug -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:49.634 07:50:51 sw_hotplug -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:49.634 07:50:51 sw_hotplug -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:49.634 07:50:51 sw_hotplug -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:49.634 07:50:51 sw_hotplug -- scripts/common.sh@336 -- # IFS=.-: 00:10:49.634 07:50:51 sw_hotplug -- scripts/common.sh@336 -- # read -ra ver1 00:10:49.634 07:50:51 sw_hotplug -- scripts/common.sh@337 -- # IFS=.-: 00:10:49.634 07:50:51 sw_hotplug -- scripts/common.sh@337 -- # read -ra ver2 00:10:49.634 07:50:51 sw_hotplug -- scripts/common.sh@338 -- # local 'op=<' 00:10:49.634 07:50:51 sw_hotplug -- scripts/common.sh@340 -- # ver1_l=2 00:10:49.634 07:50:51 sw_hotplug -- scripts/common.sh@341 -- # ver2_l=1 00:10:49.634 07:50:51 sw_hotplug -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:49.634 07:50:51 sw_hotplug -- scripts/common.sh@344 -- # case "$op" in 00:10:49.634 07:50:51 sw_hotplug -- scripts/common.sh@345 -- # : 1 00:10:49.634 07:50:51 sw_hotplug -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:49.634 07:50:51 sw_hotplug -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:49.634 07:50:51 sw_hotplug -- scripts/common.sh@365 -- # decimal 1 00:10:49.634 07:50:51 sw_hotplug -- scripts/common.sh@353 -- # local d=1 00:10:49.634 07:50:51 sw_hotplug -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:49.634 07:50:51 sw_hotplug -- scripts/common.sh@355 -- # echo 1 00:10:49.634 07:50:51 sw_hotplug -- scripts/common.sh@365 -- # ver1[v]=1 00:10:49.634 07:50:51 sw_hotplug -- scripts/common.sh@366 -- # decimal 2 00:10:49.634 07:50:51 sw_hotplug -- scripts/common.sh@353 -- # local d=2 00:10:49.634 07:50:51 sw_hotplug -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:49.634 07:50:51 sw_hotplug -- scripts/common.sh@355 -- # echo 2 00:10:49.634 07:50:51 sw_hotplug -- scripts/common.sh@366 -- # ver2[v]=2 00:10:49.634 07:50:51 sw_hotplug -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:49.634 07:50:51 sw_hotplug -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:49.634 07:50:51 sw_hotplug -- scripts/common.sh@368 -- # return 0 00:10:49.634 07:50:51 sw_hotplug -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:49.634 07:50:51 sw_hotplug -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:49.634 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:49.634 --rc genhtml_branch_coverage=1 00:10:49.634 --rc genhtml_function_coverage=1 00:10:49.634 --rc genhtml_legend=1 00:10:49.634 --rc geninfo_all_blocks=1 00:10:49.634 --rc geninfo_unexecuted_blocks=1 00:10:49.634 00:10:49.634 ' 00:10:49.634 07:50:51 sw_hotplug -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:49.634 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:49.634 --rc genhtml_branch_coverage=1 00:10:49.634 --rc genhtml_function_coverage=1 00:10:49.634 --rc genhtml_legend=1 00:10:49.634 --rc geninfo_all_blocks=1 00:10:49.634 --rc geninfo_unexecuted_blocks=1 00:10:49.634 00:10:49.634 ' 00:10:49.634 07:50:51 sw_hotplug -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:10:49.634 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:49.634 --rc genhtml_branch_coverage=1 00:10:49.634 --rc genhtml_function_coverage=1 00:10:49.634 --rc genhtml_legend=1 00:10:49.634 --rc geninfo_all_blocks=1 00:10:49.634 --rc geninfo_unexecuted_blocks=1 00:10:49.634 00:10:49.634 ' 00:10:49.634 07:50:51 sw_hotplug -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:49.634 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:49.634 --rc genhtml_branch_coverage=1 00:10:49.634 --rc genhtml_function_coverage=1 00:10:49.634 --rc genhtml_legend=1 00:10:49.634 --rc geninfo_all_blocks=1 00:10:49.634 --rc geninfo_unexecuted_blocks=1 00:10:49.634 00:10:49.634 ' 00:10:49.634 07:50:51 sw_hotplug -- nvme/sw_hotplug.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:49.893 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:50.150 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:50.150 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:50.150 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:50.150 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:50.150 07:50:51 sw_hotplug -- nvme/sw_hotplug.sh@131 -- # hotplug_wait=6 00:10:50.150 07:50:51 sw_hotplug -- nvme/sw_hotplug.sh@132 -- # hotplug_events=3 00:10:50.150 07:50:51 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvmes=($(nvme_in_userspace)) 
00:10:50.150 07:50:51 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvme_in_userspace 00:10:50.150 07:50:51 sw_hotplug -- scripts/common.sh@312 -- # local bdf bdfs 00:10:50.150 07:50:51 sw_hotplug -- scripts/common.sh@313 -- # local nvmes 00:10:50.150 07:50:51 sw_hotplug -- scripts/common.sh@315 -- # [[ -n '' ]] 00:10:50.151 07:50:51 sw_hotplug -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:10:50.151 07:50:51 sw_hotplug -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:10:50.151 07:50:51 sw_hotplug -- scripts/common.sh@298 -- # local bdf= 00:10:50.151 07:50:51 sw_hotplug -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:10:50.151 07:50:51 sw_hotplug -- scripts/common.sh@233 -- # local class 00:10:50.151 07:50:51 sw_hotplug -- scripts/common.sh@234 -- # local subclass 00:10:50.151 07:50:51 sw_hotplug -- scripts/common.sh@235 -- # local progif 00:10:50.151 07:50:51 sw_hotplug -- scripts/common.sh@236 -- # printf %02x 1 00:10:50.151 07:50:51 sw_hotplug -- scripts/common.sh@236 -- # class=01 00:10:50.151 07:50:51 sw_hotplug -- scripts/common.sh@237 -- # printf %02x 8 00:10:50.151 07:50:51 sw_hotplug -- scripts/common.sh@237 -- # subclass=08 00:10:50.151 07:50:51 sw_hotplug -- scripts/common.sh@238 -- # printf %02x 2 00:10:50.151 07:50:51 sw_hotplug -- scripts/common.sh@238 -- # progif=02 00:10:50.151 07:50:51 sw_hotplug -- scripts/common.sh@240 -- # hash lspci 00:10:50.151 07:50:51 sw_hotplug -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:10:50.151 07:50:51 sw_hotplug -- scripts/common.sh@242 -- # lspci -mm -n -D 00:10:50.151 07:50:51 sw_hotplug -- scripts/common.sh@243 -- # grep -i -- -p02 00:10:50.151 07:50:51 sw_hotplug -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:10:50.151 07:50:51 sw_hotplug -- scripts/common.sh@245 -- # tr -d '"' 00:10:50.151 07:50:51 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:10:50.151 07:50:51 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:10:50.151 07:50:51 sw_hotplug -- scripts/common.sh@18 -- # local i 00:10:50.151 07:50:51 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:10:50.151 07:50:51 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:50.151 07:50:51 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:10:50.151 07:50:51 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:10:50.151 07:50:51 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:10:50.151 07:50:51 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:10:50.151 07:50:51 sw_hotplug -- scripts/common.sh@18 -- # local i 00:10:50.151 07:50:51 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:10:50.151 07:50:51 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:50.151 07:50:51 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:10:50.151 07:50:51 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:10:50.151 07:50:51 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:10:50.151 07:50:51 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:12.0 00:10:50.151 07:50:51 sw_hotplug -- scripts/common.sh@18 -- # local i 00:10:50.151 07:50:51 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:10:50.151 07:50:51 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:50.151 07:50:51 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:10:50.151 07:50:51 sw_hotplug -- 
scripts/common.sh@302 -- # echo 0000:00:12.0 00:10:50.151 07:50:51 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:10:50.151 07:50:51 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:13.0 00:10:50.151 07:50:51 sw_hotplug -- scripts/common.sh@18 -- # local i 00:10:50.151 07:50:51 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:10:50.151 07:50:51 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:50.151 07:50:51 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:10:50.151 07:50:51 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:13.0 00:10:50.151 07:50:51 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:10:50.151 07:50:51 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:10:50.151 07:50:51 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:10:50.151 07:50:51 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:10:50.151 07:50:51 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:10:50.151 07:50:51 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:10:50.151 07:50:51 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:10:50.151 07:50:51 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:10:50.151 07:50:51 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:10:50.151 07:50:51 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:10:50.151 07:50:51 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:10:50.151 07:50:51 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:12.0 ]] 00:10:50.151 07:50:51 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:10:50.151 07:50:52 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:10:50.151 07:50:52 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:10:50.151 07:50:52 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:10:50.151 07:50:52 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:13.0 ]] 00:10:50.151 07:50:52 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:10:50.151 07:50:52 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:10:50.151 07:50:52 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:10:50.151 07:50:52 sw_hotplug -- scripts/common.sh@328 -- # (( 4 )) 00:10:50.151 07:50:52 sw_hotplug -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:10:50.151 07:50:52 sw_hotplug -- nvme/sw_hotplug.sh@134 -- # nvme_count=2 00:10:50.151 07:50:52 sw_hotplug -- nvme/sw_hotplug.sh@135 -- # nvmes=("${nvmes[@]::nvme_count}") 00:10:50.151 07:50:52 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:10:50.409 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:50.667 Waiting for block devices as requested 00:10:50.667 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:10:50.667 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:10:50.925 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:10:50.925 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:10:56.190 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:10:56.190 07:50:57 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # PCI_ALLOWED='0000:00:10.0 0000:00:11.0' 00:10:56.190 07:50:57 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # 
/home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:56.456 0000:00:03.0 (1af4 1001): Skipping denied controller at 0000:00:03.0 00:10:56.456 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:56.456 0000:00:12.0 (1b36 0010): Skipping denied controller at 0000:00:12.0 00:10:56.715 0000:00:13.0 (1b36 0010): Skipping denied controller at 0000:00:13.0 00:10:56.973 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:10:56.973 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:10:56.973 07:50:58 sw_hotplug -- nvme/sw_hotplug.sh@143 -- # xtrace_disable 00:10:56.973 07:50:58 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:57.232 07:50:59 sw_hotplug -- nvme/sw_hotplug.sh@148 -- # run_hotplug 00:10:57.232 07:50:59 sw_hotplug -- nvme/sw_hotplug.sh@77 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:10:57.232 07:50:59 sw_hotplug -- nvme/sw_hotplug.sh@85 -- # hotplug_pid=68664 00:10:57.232 07:50:59 sw_hotplug -- nvme/sw_hotplug.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning 00:10:57.232 07:50:59 sw_hotplug -- nvme/sw_hotplug.sh@87 -- # debug_remove_attach_helper 3 6 false 00:10:57.232 07:50:59 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:10:57.232 07:50:59 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 false 00:10:57.232 07:50:59 sw_hotplug -- common/autotest_common.sh@707 -- # local cmd_es=0 00:10:57.232 07:50:59 sw_hotplug -- common/autotest_common.sh@709 -- # [[ -t 0 ]] 00:10:57.232 07:50:59 sw_hotplug -- common/autotest_common.sh@709 -- # exec 00:10:57.232 07:50:59 sw_hotplug -- common/autotest_common.sh@711 -- # local time=0 TIMEFORMAT=%2R 00:10:57.232 07:50:59 sw_hotplug -- common/autotest_common.sh@717 -- # remove_attach_helper 3 6 false 00:10:57.232 07:50:59 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:10:57.232 07:50:59 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:10:57.232 07:50:59 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=false 00:10:57.232 07:50:59 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:10:57.232 07:50:59 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:10:57.492 Initializing NVMe Controllers 00:10:57.492 Attaching to 0000:00:10.0 00:10:57.492 Attaching to 0000:00:11.0 00:10:57.492 Attached to 0000:00:10.0 00:10:57.492 Attached to 0000:00:11.0 00:10:57.492 Initialization complete. Starting I/O... 
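From this point the run alternates between I/O counters printed by the hotplug example app (started above as build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning) and surprise-removal events: each controller is yanked off the PCI bus, the app logs "Controller removed" and "in failed state", and the device is later brought back. The sketch below shows roughly what one remove/rescan cycle looks like at the sysfs level; the exact device path and timing are assumptions here, and the test's own remove_attach_helper wraps this with additional checks.

    # One surprise-removal cycle, approximately what the test helper does (run as root).
    bdf=0000:00:10.0                              # first controller under test
    echo 1 > "/sys/bus/pci/devices/$bdf/remove"   # hot-remove: the app should log the unregister
    sleep 6                                       # hotplug_wait from the test: let the event land
    echo 1 > /sys/bus/pci/rescan                  # rescan the bus so the controller re-attaches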
00:10:57.492 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:10:57.492 QEMU NVMe Ctrl (12341 ): 0 I/Os completed (+0) 00:10:57.492 00:10:58.426 QEMU NVMe Ctrl (12340 ): 1072 I/Os completed (+1072) 00:10:58.426 QEMU NVMe Ctrl (12341 ): 1226 I/Os completed (+1226) 00:10:58.426 00:10:59.360 QEMU NVMe Ctrl (12340 ): 2252 I/Os completed (+1180) 00:10:59.360 QEMU NVMe Ctrl (12341 ): 2608 I/Os completed (+1382) 00:10:59.360 00:11:00.737 QEMU NVMe Ctrl (12340 ): 3940 I/Os completed (+1688) 00:11:00.737 QEMU NVMe Ctrl (12341 ): 4425 I/Os completed (+1817) 00:11:00.737 00:11:01.304 QEMU NVMe Ctrl (12340 ): 5648 I/Os completed (+1708) 00:11:01.304 QEMU NVMe Ctrl (12341 ): 6249 I/Os completed (+1824) 00:11:01.304 00:11:02.701 QEMU NVMe Ctrl (12340 ): 7100 I/Os completed (+1452) 00:11:02.701 QEMU NVMe Ctrl (12341 ): 8098 I/Os completed (+1849) 00:11:02.701 00:11:03.268 07:51:05 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:03.268 07:51:05 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:03.268 07:51:05 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:03.268 [2024-10-09 07:51:05.080757] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:11:03.268 Controller removed: QEMU NVMe Ctrl (12340 ) 00:11:03.268 [2024-10-09 07:51:05.083317] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:03.268 [2024-10-09 07:51:05.083428] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:03.268 [2024-10-09 07:51:05.083466] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:03.268 [2024-10-09 07:51:05.083500] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:03.268 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:11:03.268 [2024-10-09 07:51:05.087251] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:03.268 [2024-10-09 07:51:05.087356] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:03.268 [2024-10-09 07:51:05.087404] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:03.268 [2024-10-09 07:51:05.087433] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:03.268 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:10.0/vendor 00:11:03.268 EAL: Scan for (pci) bus failed. 00:11:03.268 07:51:05 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:03.268 07:51:05 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:03.268 [2024-10-09 07:51:05.114982] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:11:03.268 Controller removed: QEMU NVMe Ctrl (12341 ) 00:11:03.268 [2024-10-09 07:51:05.116778] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:03.268 [2024-10-09 07:51:05.116843] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:03.268 [2024-10-09 07:51:05.116885] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:03.268 [2024-10-09 07:51:05.116921] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:03.268 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:11:03.268 [2024-10-09 07:51:05.119541] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:03.268 [2024-10-09 07:51:05.119596] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:03.268 [2024-10-09 07:51:05.119625] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:03.268 [2024-10-09 07:51:05.119648] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:03.268 07:51:05 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:11:03.268 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:11:03.268 EAL: Scan for (pci) bus failed. 00:11:03.268 07:51:05 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:03.268 07:51:05 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:03.268 07:51:05 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:03.268 07:51:05 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:03.526 07:51:05 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:03.526 00:11:03.526 07:51:05 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:03.526 07:51:05 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:03.526 07:51:05 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:03.526 07:51:05 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:11:03.526 Attaching to 0000:00:10.0 00:11:03.526 Attached to 0000:00:10.0 00:11:03.527 07:51:05 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:03.527 07:51:05 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:03.527 07:51:05 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:03.527 Attaching to 0000:00:11.0 00:11:03.527 Attached to 0000:00:11.0 00:11:04.463 QEMU NVMe Ctrl (12340 ): 1521 I/Os completed (+1521) 00:11:04.463 QEMU NVMe Ctrl (12341 ): 1588 I/Os completed (+1588) 00:11:04.463 00:11:05.434 QEMU NVMe Ctrl (12340 ): 3052 I/Os completed (+1531) 00:11:05.434 QEMU NVMe Ctrl (12341 ): 3395 I/Os completed (+1807) 00:11:05.434 00:11:06.368 QEMU NVMe Ctrl (12340 ): 4438 I/Os completed (+1386) 00:11:06.368 QEMU NVMe Ctrl (12341 ): 5069 I/Os completed (+1674) 00:11:06.368 00:11:07.304 QEMU NVMe Ctrl (12340 ): 6370 I/Os completed (+1932) 00:11:07.304 QEMU NVMe Ctrl (12341 ): 7368 I/Os completed (+2299) 00:11:07.304 00:11:08.704 QEMU NVMe Ctrl (12340 ): 7957 I/Os completed (+1587) 00:11:08.704 QEMU NVMe Ctrl (12341 ): 9122 I/Os completed (+1754) 00:11:08.704 00:11:09.639 QEMU NVMe Ctrl (12340 ): 9545 I/Os completed (+1588) 00:11:09.639 QEMU NVMe Ctrl (12341 ): 10922 I/Os completed (+1800) 00:11:09.639 00:11:10.573 QEMU NVMe Ctrl (12340 ): 11294 I/Os completed (+1749) 00:11:10.573 QEMU NVMe Ctrl (12341 ): 12877 I/Os completed (+1955) 
00:11:10.573 00:11:11.507 QEMU NVMe Ctrl (12340 ): 12983 I/Os completed (+1689) 00:11:11.507 QEMU NVMe Ctrl (12341 ): 14842 I/Os completed (+1965) 00:11:11.507 00:11:12.441 QEMU NVMe Ctrl (12340 ): 14688 I/Os completed (+1705) 00:11:12.441 QEMU NVMe Ctrl (12341 ): 16798 I/Os completed (+1956) 00:11:12.441 00:11:13.375 QEMU NVMe Ctrl (12340 ): 16304 I/Os completed (+1616) 00:11:13.375 QEMU NVMe Ctrl (12341 ): 18559 I/Os completed (+1761) 00:11:13.375 00:11:14.309 QEMU NVMe Ctrl (12340 ): 17949 I/Os completed (+1645) 00:11:14.309 QEMU NVMe Ctrl (12341 ): 20281 I/Os completed (+1722) 00:11:14.309 00:11:15.683 QEMU NVMe Ctrl (12340 ): 19673 I/Os completed (+1724) 00:11:15.683 QEMU NVMe Ctrl (12341 ): 22101 I/Os completed (+1820) 00:11:15.683 00:11:15.683 07:51:17 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:11:15.683 07:51:17 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:15.683 07:51:17 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:15.683 07:51:17 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:15.683 [2024-10-09 07:51:17.409549] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:11:15.683 Controller removed: QEMU NVMe Ctrl (12340 ) 00:11:15.683 [2024-10-09 07:51:17.412481] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:15.683 [2024-10-09 07:51:17.412580] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:15.683 [2024-10-09 07:51:17.412620] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:15.683 [2024-10-09 07:51:17.412658] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:15.683 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:11:15.683 [2024-10-09 07:51:17.416881] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:15.683 [2024-10-09 07:51:17.416972] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:15.683 [2024-10-09 07:51:17.417013] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:15.683 [2024-10-09 07:51:17.417046] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:15.683 07:51:17 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:15.683 07:51:17 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:15.683 [2024-10-09 07:51:17.447159] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:11:15.683 Controller removed: QEMU NVMe Ctrl (12341 ) 00:11:15.683 [2024-10-09 07:51:17.449800] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:15.683 [2024-10-09 07:51:17.449885] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:15.683 [2024-10-09 07:51:17.449931] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:15.683 [2024-10-09 07:51:17.449966] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:15.683 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:11:15.683 [2024-10-09 07:51:17.453728] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:15.683 [2024-10-09 07:51:17.453804] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:15.683 [2024-10-09 07:51:17.453842] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:15.683 [2024-10-09 07:51:17.453874] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:15.683 07:51:17 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:11:15.683 07:51:17 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:15.683 07:51:17 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:15.683 07:51:17 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:15.683 07:51:17 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:15.683 07:51:17 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:15.683 07:51:17 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:15.683 07:51:17 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:15.683 07:51:17 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:15.683 07:51:17 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:11:15.683 Attaching to 0000:00:10.0 00:11:15.683 Attached to 0000:00:10.0 00:11:15.941 07:51:17 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:15.941 07:51:17 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:15.941 07:51:17 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:15.941 Attaching to 0000:00:11.0 00:11:15.941 Attached to 0000:00:11.0 00:11:16.507 QEMU NVMe Ctrl (12340 ): 1175 I/Os completed (+1175) 00:11:16.507 QEMU NVMe Ctrl (12341 ): 1441 I/Os completed (+1441) 00:11:16.507 00:11:17.440 QEMU NVMe Ctrl (12340 ): 4879 I/Os completed (+3704) 00:11:17.440 QEMU NVMe Ctrl (12341 ): 5866 I/Os completed (+4425) 00:11:17.440 00:11:18.373 QEMU NVMe Ctrl (12340 ): 9572 I/Os completed (+4693) 00:11:18.373 QEMU NVMe Ctrl (12341 ): 12273 I/Os completed (+6407) 00:11:18.373 00:11:19.307 QEMU NVMe Ctrl (12340 ): 11601 I/Os completed (+2029) 00:11:19.307 QEMU NVMe Ctrl (12341 ): 15175 I/Os completed (+2902) 00:11:19.307 00:11:20.681 QEMU NVMe Ctrl (12340 ): 15047 I/Os completed (+3446) 00:11:20.681 QEMU NVMe Ctrl (12341 ): 19389 I/Os completed (+4214) 00:11:20.681 00:11:21.615 QEMU NVMe Ctrl (12340 ): 19289 I/Os completed (+4242) 00:11:21.615 QEMU NVMe Ctrl (12341 ): 23910 I/Os completed (+4521) 00:11:21.615 00:11:22.550 QEMU NVMe Ctrl (12340 ): 22137 I/Os completed (+2848) 00:11:22.550 QEMU NVMe Ctrl (12341 ): 28069 I/Os completed (+4159) 00:11:22.550 00:11:23.483 QEMU NVMe Ctrl (12340 ): 24624 I/Os completed (+2487) 00:11:23.483 QEMU NVMe Ctrl (12341 ): 31765 I/Os completed (+3696) 00:11:23.483 
00:11:24.417 QEMU NVMe Ctrl (12340 ): 28029 I/Os completed (+3405) 00:11:24.417 QEMU NVMe Ctrl (12341 ): 36076 I/Os completed (+4311) 00:11:24.417 00:11:25.351 QEMU NVMe Ctrl (12340 ): 32593 I/Os completed (+4564) 00:11:25.351 QEMU NVMe Ctrl (12341 ): 40507 I/Os completed (+4431) 00:11:25.351 00:11:26.726 QEMU NVMe Ctrl (12340 ): 34919 I/Os completed (+2326) 00:11:26.726 QEMU NVMe Ctrl (12341 ): 43297 I/Os completed (+2790) 00:11:26.726 00:11:27.660 QEMU NVMe Ctrl (12340 ): 36827 I/Os completed (+1908) 00:11:27.660 QEMU NVMe Ctrl (12341 ): 45381 I/Os completed (+2084) 00:11:27.660 00:11:27.919 07:51:29 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:11:27.919 07:51:29 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:27.919 07:51:29 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:27.919 07:51:29 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:27.919 [2024-10-09 07:51:29.736454] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:11:27.919 Controller removed: QEMU NVMe Ctrl (12340 ) 00:11:27.919 [2024-10-09 07:51:29.741176] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:27.919 [2024-10-09 07:51:29.741272] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:27.919 [2024-10-09 07:51:29.741315] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:27.919 [2024-10-09 07:51:29.741381] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:27.919 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:11:27.919 [2024-10-09 07:51:29.745834] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:27.919 [2024-10-09 07:51:29.745917] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:27.919 [2024-10-09 07:51:29.745955] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:27.919 [2024-10-09 07:51:29.745989] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:27.919 07:51:29 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:27.919 07:51:29 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:27.919 [2024-10-09 07:51:29.773405] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:11:27.919 Controller removed: QEMU NVMe Ctrl (12341 ) 00:11:27.919 [2024-10-09 07:51:29.776238] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:27.919 [2024-10-09 07:51:29.776355] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:27.919 [2024-10-09 07:51:29.776405] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:27.919 [2024-10-09 07:51:29.776442] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:27.919 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:11:27.919 [2024-10-09 07:51:29.780391] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:27.919 [2024-10-09 07:51:29.780467] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:27.919 [2024-10-09 07:51:29.780511] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:27.919 [2024-10-09 07:51:29.780545] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:27.919 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:11:27.919 EAL: Scan for (pci) bus failed. 00:11:27.919 07:51:29 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:11:27.919 07:51:29 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:27.919 07:51:29 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:27.919 07:51:29 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:27.919 07:51:29 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:28.182 07:51:29 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:28.182 07:51:30 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:28.182 07:51:30 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:28.182 07:51:30 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:28.182 07:51:30 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:11:28.182 Attaching to 0000:00:10.0 00:11:28.182 Attached to 0000:00:10.0 00:11:28.182 07:51:30 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:28.182 07:51:30 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:28.182 07:51:30 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:28.182 Attaching to 0000:00:11.0 00:11:28.182 Attached to 0000:00:11.0 00:11:28.182 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:11:28.182 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:11:28.182 [2024-10-09 07:51:30.111907] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09 00:11:40.394 07:51:42 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:11:40.394 07:51:42 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:40.394 07:51:42 sw_hotplug -- common/autotest_common.sh@717 -- # time=43.03 00:11:40.394 07:51:42 sw_hotplug -- common/autotest_common.sh@718 -- # echo 43.03 00:11:40.394 07:51:42 sw_hotplug -- common/autotest_common.sh@720 -- # return 0 00:11:40.394 07:51:42 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=43.03 00:11:40.394 07:51:42 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 43.03 2 00:11:40.394 remove_attach_helper took 43.03s to complete (handling 2 nvme drive(s)) 07:51:42 sw_hotplug -- 
nvme/sw_hotplug.sh@91 -- # sleep 6 00:11:46.952 07:51:48 sw_hotplug -- nvme/sw_hotplug.sh@93 -- # kill -0 68664 00:11:46.952 /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 93: kill: (68664) - No such process 00:11:46.952 07:51:48 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # wait 68664 00:11:46.952 07:51:48 sw_hotplug -- nvme/sw_hotplug.sh@102 -- # trap - SIGINT SIGTERM EXIT 00:11:46.952 07:51:48 sw_hotplug -- nvme/sw_hotplug.sh@151 -- # tgt_run_hotplug 00:11:46.952 07:51:48 sw_hotplug -- nvme/sw_hotplug.sh@107 -- # local dev 00:11:46.952 07:51:48 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # spdk_tgt_pid=69211 00:11:46.952 07:51:48 sw_hotplug -- nvme/sw_hotplug.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:11:46.952 07:51:48 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT 00:11:46.952 07:51:48 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # waitforlisten 69211 00:11:46.952 07:51:48 sw_hotplug -- common/autotest_common.sh@831 -- # '[' -z 69211 ']' 00:11:46.952 07:51:48 sw_hotplug -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:46.952 07:51:48 sw_hotplug -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:46.952 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:46.952 07:51:48 sw_hotplug -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:46.952 07:51:48 sw_hotplug -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:46.952 07:51:48 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:46.952 [2024-10-09 07:51:48.260777] Starting SPDK v25.01-pre git sha1 1c2942c86 / DPDK 24.03.0 initialization... 
00:11:46.952 [2024-10-09 07:51:48.260990] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69211 ] 00:11:46.952 [2024-10-09 07:51:48.442813] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:46.952 [2024-10-09 07:51:48.629966] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:11:47.518 07:51:49 sw_hotplug -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:47.518 07:51:49 sw_hotplug -- common/autotest_common.sh@864 -- # return 0 00:11:47.518 07:51:49 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:11:47.518 07:51:49 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.518 07:51:49 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:47.518 07:51:49 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.518 07:51:49 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # debug_remove_attach_helper 3 6 true 00:11:47.518 07:51:49 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:11:47.518 07:51:49 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:11:47.518 07:51:49 sw_hotplug -- common/autotest_common.sh@707 -- # local cmd_es=0 00:11:47.518 07:51:49 sw_hotplug -- common/autotest_common.sh@709 -- # [[ -t 0 ]] 00:11:47.518 07:51:49 sw_hotplug -- common/autotest_common.sh@709 -- # exec 00:11:47.518 07:51:49 sw_hotplug -- common/autotest_common.sh@711 -- # local time=0 TIMEFORMAT=%2R 00:11:47.518 07:51:49 sw_hotplug -- common/autotest_common.sh@717 -- # remove_attach_helper 3 6 true 00:11:47.518 07:51:49 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:11:47.518 07:51:49 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:11:47.518 07:51:49 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:11:47.518 07:51:49 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:11:47.518 07:51:49 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:11:54.082 07:51:55 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:54.082 07:51:55 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:54.082 07:51:55 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:54.082 07:51:55 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:54.082 07:51:55 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:54.082 07:51:55 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:54.082 07:51:55 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:54.082 07:51:55 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:54.082 07:51:55 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:54.082 07:51:55 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:54.082 07:51:55 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.082 07:51:55 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:54.082 07:51:55 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:54.082 07:51:55 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.082 [2024-10-09 07:51:55.508728] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
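sh@115 in the trace above turns on the target's hotplug monitor over RPC before any devices are yanked; sh@119/sh@120 later in the run toggle it off and on again for the bdev-level pass. Usage, exactly as traced:

```bash
rpc_cmd bdev_nvme_set_hotplug -e   # enable the NVMe hotplug monitor (sh@115)
rpc_cmd bdev_nvme_set_hotplug -d   # disable it again (sh@119)
```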
00:11:54.082 [2024-10-09 07:51:55.511716] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:54.082 [2024-10-09 07:51:55.511776] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:54.082 [2024-10-09 07:51:55.511799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:54.082 [2024-10-09 07:51:55.511897] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:54.082 [2024-10-09 07:51:55.511920] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:54.082 [2024-10-09 07:51:55.511938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:54.082 [2024-10-09 07:51:55.511955] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:54.082 [2024-10-09 07:51:55.511972] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:54.082 [2024-10-09 07:51:55.511986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:54.082 [2024-10-09 07:51:55.512008] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:54.082 [2024-10-09 07:51:55.512022] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:54.082 [2024-10-09 07:51:55.512038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:54.082 07:51:55 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:54.082 07:51:55 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:54.082 [2024-10-09 07:51:55.908730] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
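The bdev_bdfs helper is fully visible in the xtrace above (sh@12/sh@13). Reconstructed below, with a minimal rpc_cmd stand-in for the suite's wrapper around scripts/rpc.py:

```bash
# Stand-in for the suite's rpc_cmd wrapper (assumption: the real one adds
# retries and socket selection on top of scripts/rpc.py).
rpc_cmd() {
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py "$@"
}

# List the unique PCI addresses (BDFs) backing every NVMe bdev the target
# currently exposes; pipeline taken verbatim from the sh@12/sh@13 trace.
bdev_bdfs() {
    rpc_cmd bdev_get_bdevs \
        | jq -r '.[].driver_specific.nvme[].pci_address' \
        | sort -u
}
```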
00:11:54.082 [2024-10-09 07:51:55.911868] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:54.082 [2024-10-09 07:51:55.911933] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:54.082 [2024-10-09 07:51:55.911968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:54.082 [2024-10-09 07:51:55.911996] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:54.082 [2024-10-09 07:51:55.912014] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:54.082 [2024-10-09 07:51:55.912030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:54.082 [2024-10-09 07:51:55.912047] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:54.082 [2024-10-09 07:51:55.912061] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:54.082 [2024-10-09 07:51:55.912077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:54.082 [2024-10-09 07:51:55.912092] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:54.082 [2024-10-09 07:51:55.912108] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:54.082 [2024-10-09 07:51:55.912122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:54.082 07:51:56 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:54.082 07:51:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:54.082 07:51:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:54.082 07:51:56 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:54.082 07:51:56 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:54.082 07:51:56 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:54.082 07:51:56 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.082 07:51:56 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:54.082 07:51:56 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.082 07:51:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:54.082 07:51:56 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:54.409 07:51:56 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:54.409 07:51:56 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:54.409 07:51:56 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:54.409 07:51:56 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:54.409 07:51:56 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:54.409 07:51:56 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:54.409 07:51:56 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:54.409 07:51:56 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:11:54.409 07:51:56 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:54.409 07:51:56 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:54.409 07:51:56 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:06.606 07:52:08 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:06.606 07:52:08 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:06.606 07:52:08 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:06.606 07:52:08 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:06.606 07:52:08 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:06.606 07:52:08 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:06.606 07:52:08 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.606 07:52:08 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:06.606 07:52:08 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.606 07:52:08 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:06.606 07:52:08 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:06.606 07:52:08 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:06.606 07:52:08 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:06.606 07:52:08 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:06.606 07:52:08 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:06.606 [2024-10-09 07:52:08.508914] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:12:06.606 [2024-10-09 07:52:08.512067] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:06.606 [2024-10-09 07:52:08.512126] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:06.606 [2024-10-09 07:52:08.512149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:06.606 [2024-10-09 07:52:08.512180] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:06.606 [2024-10-09 07:52:08.512196] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:06.606 [2024-10-09 07:52:08.512214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:06.607 [2024-10-09 07:52:08.512229] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:06.607 [2024-10-09 07:52:08.512245] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:06.607 [2024-10-09 07:52:08.512260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:06.607 [2024-10-09 07:52:08.512276] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:06.607 [2024-10-09 07:52:08.512291] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:06.607 [2024-10-09 07:52:08.512306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) 
qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:06.607 07:52:08 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:12:06.607 07:52:08 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:06.607 07:52:08 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:06.607 07:52:08 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:06.607 07:52:08 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:06.607 07:52:08 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:06.607 07:52:08 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.607 07:52:08 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:06.607 07:52:08 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.607 07:52:08 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:12:06.607 07:52:08 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:07.213 [2024-10-09 07:52:09.008922] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 00:12:07.213 [2024-10-09 07:52:09.011960] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:07.213 [2024-10-09 07:52:09.012014] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:07.213 [2024-10-09 07:52:09.012041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:07.213 [2024-10-09 07:52:09.012068] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:07.213 [2024-10-09 07:52:09.012086] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:07.213 [2024-10-09 07:52:09.012102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:07.213 [2024-10-09 07:52:09.012120] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:07.213 [2024-10-09 07:52:09.012134] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:07.213 [2024-10-09 07:52:09.012150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:07.213 [2024-10-09 07:52:09.012165] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:07.213 [2024-10-09 07:52:09.012181] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:07.213 [2024-10-09 07:52:09.012195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:07.213 07:52:09 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:12:07.213 07:52:09 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:07.213 07:52:09 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:07.213 07:52:09 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:07.213 07:52:09 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:07.213 07:52:09 sw_hotplug -- common/autotest_common.sh@561 -- 
# xtrace_disable 00:12:07.213 07:52:09 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:07.213 07:52:09 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:07.213 07:52:09 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.213 07:52:09 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:12:07.213 07:52:09 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:07.471 07:52:09 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:07.471 07:52:09 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:07.471 07:52:09 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:07.471 07:52:09 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:07.471 07:52:09 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:07.471 07:52:09 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:07.471 07:52:09 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:07.471 07:52:09 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:12:07.471 07:52:09 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:07.471 07:52:09 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:07.471 07:52:09 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:19.670 07:52:21 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:19.670 07:52:21 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:19.670 07:52:21 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:19.670 07:52:21 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:19.670 07:52:21 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:19.670 07:52:21 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:19.670 07:52:21 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.670 07:52:21 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:19.670 07:52:21 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.670 07:52:21 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:19.670 07:52:21 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:19.670 07:52:21 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:19.670 07:52:21 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:19.670 07:52:21 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:19.670 07:52:21 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:19.670 07:52:21 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:12:19.670 07:52:21 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:19.670 07:52:21 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:19.670 07:52:21 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:19.670 07:52:21 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:19.670 07:52:21 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:19.670 07:52:21 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.670 07:52:21 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:19.670 [2024-10-09 07:52:21.509097] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
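Each hotplug event opens with sh@39/sh@40: an `echo 1` per device whose redirect target xtrace does not show. The standard Linux mechanism, and the obvious fit for this trace, is the sysfs `remove` node, which detaches the function from its driver and drops it from the bus; treat the path below as an inference:

```bash
# Surprise-remove both emulated NVMe controllers (sh@39/sh@40; the sysfs
# target is inferred, since xtrace hides redirections).
nvmes=(0000:00:10.0 0000:00:11.0)
for dev in "${nvmes[@]}"; do
    echo 1 > "/sys/bus/pci/devices/$dev/remove"
done
```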
00:12:19.670 [2024-10-09 07:52:21.512358] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:19.670 [2024-10-09 07:52:21.512413] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:19.670 [2024-10-09 07:52:21.512436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:19.670 [2024-10-09 07:52:21.512471] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:19.671 [2024-10-09 07:52:21.512488] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:19.671 [2024-10-09 07:52:21.512513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:19.671 [2024-10-09 07:52:21.512529] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:19.671 [2024-10-09 07:52:21.512549] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:19.671 [2024-10-09 07:52:21.512564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:19.671 [2024-10-09 07:52:21.512584] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:19.671 [2024-10-09 07:52:21.512598] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:19.671 [2024-10-09 07:52:21.512618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:19.671 07:52:21 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.671 07:52:21 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:12:19.671 07:52:21 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:19.929 [2024-10-09 07:52:21.909117] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
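Once both controllers report "in failed state", sh@50/sh@51 poll every half second until they vanish from bdev_get_bdevs; the `(( 2 > 0 ))`, `sleep 0.5`, and "Still waiting ... to be gone" lines in the trace are this loop. A sketch consistent with the trace (the exact control flow inside sw_hotplug.sh is inferred):

```bash
# Poll until the surprise-removed controllers drop out of the bdev list.
bdfs=($(bdev_bdfs))
while (( ${#bdfs[@]} > 0 )); do
    printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"
    sleep 0.5
    bdfs=($(bdev_bdfs))
done
```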
00:12:19.929 [2024-10-09 07:52:21.912636] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:19.929 [2024-10-09 07:52:21.912689] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:19.929 [2024-10-09 07:52:21.912725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:19.929 [2024-10-09 07:52:21.912752] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:19.929 [2024-10-09 07:52:21.912770] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:19.929 [2024-10-09 07:52:21.912785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:19.929 [2024-10-09 07:52:21.912802] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:19.929 [2024-10-09 07:52:21.912816] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:19.929 [2024-10-09 07:52:21.912837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:19.929 [2024-10-09 07:52:21.912852] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:19.929 [2024-10-09 07:52:21.912867] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:19.929 [2024-10-09 07:52:21.912881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:20.186 07:52:22 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:12:20.186 07:52:22 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:20.187 07:52:22 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:20.187 07:52:22 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:20.187 07:52:22 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:20.187 07:52:22 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.187 07:52:22 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:20.187 07:52:22 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:20.187 07:52:22 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.187 07:52:22 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:12:20.187 07:52:22 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:20.465 07:52:22 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:20.465 07:52:22 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:20.465 07:52:22 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:20.465 07:52:22 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:20.465 07:52:22 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:20.466 07:52:22 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:20.466 07:52:22 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:20.466 07:52:22 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:12:20.466 07:52:22 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:20.466 07:52:22 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:20.466 07:52:22 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:32.667 07:52:34 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:32.667 07:52:34 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:32.667 07:52:34 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:32.667 07:52:34 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:32.667 07:52:34 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:32.667 07:52:34 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:32.667 07:52:34 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.667 07:52:34 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:32.667 07:52:34 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.667 07:52:34 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:32.667 07:52:34 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:32.667 07:52:34 sw_hotplug -- common/autotest_common.sh@717 -- # time=45.05 00:12:32.667 07:52:34 sw_hotplug -- common/autotest_common.sh@718 -- # echo 45.05 00:12:32.667 07:52:34 sw_hotplug -- common/autotest_common.sh@720 -- # return 0 00:12:32.667 07:52:34 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.05 00:12:32.667 07:52:34 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.05 2 00:12:32.667 remove_attach_helper took 45.05s to complete (handling 2 nvme drive(s)) 07:52:34 sw_hotplug -- nvme/sw_hotplug.sh@119 -- # rpc_cmd bdev_nvme_set_hotplug -d 00:12:32.667 07:52:34 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.667 07:52:34 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:32.667 07:52:34 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.667 07:52:34 sw_hotplug -- nvme/sw_hotplug.sh@120 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:12:32.667 07:52:34 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.667 07:52:34 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:32.667 07:52:34 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.667 07:52:34 sw_hotplug -- nvme/sw_hotplug.sh@122 -- # debug_remove_attach_helper 3 6 true 00:12:32.667 07:52:34 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:12:32.667 07:52:34 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:12:32.667 07:52:34 sw_hotplug -- common/autotest_common.sh@707 -- # local cmd_es=0 00:12:32.667 07:52:34 sw_hotplug -- common/autotest_common.sh@709 -- # [[ -t 0 ]] 00:12:32.667 07:52:34 sw_hotplug -- common/autotest_common.sh@709 -- # exec 00:12:32.667 07:52:34 sw_hotplug -- common/autotest_common.sh@711 -- # local time=0 TIMEFORMAT=%2R 00:12:32.667 07:52:34 sw_hotplug -- common/autotest_common.sh@717 -- # remove_attach_helper 3 6 true 00:12:32.667 07:52:34 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:12:32.667 07:52:34 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:12:32.667 07:52:34 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:12:32.667 07:52:34 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:12:32.667 07:52:34 sw_hotplug -- 
nvme/sw_hotplug.sh@36 -- # sleep 6 00:12:39.228 07:52:40 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:39.228 07:52:40 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:39.228 07:52:40 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:39.228 07:52:40 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:39.228 07:52:40 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:39.228 07:52:40 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:12:39.228 07:52:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:39.228 07:52:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:39.228 07:52:40 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:39.228 07:52:40 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:39.228 07:52:40 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.228 07:52:40 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:39.228 07:52:40 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:39.228 [2024-10-09 07:52:40.593184] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:12:39.228 [2024-10-09 07:52:40.595348] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:39.228 [2024-10-09 07:52:40.595420] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:39.228 [2024-10-09 07:52:40.595443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:39.228 [2024-10-09 07:52:40.595475] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:39.228 [2024-10-09 07:52:40.595492] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:39.228 [2024-10-09 07:52:40.595509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:39.228 [2024-10-09 07:52:40.595525] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:39.228 [2024-10-09 07:52:40.595541] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:39.228 [2024-10-09 07:52:40.595555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:39.228 [2024-10-09 07:52:40.595572] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:39.228 [2024-10-09 07:52:40.595585] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:39.228 [2024-10-09 07:52:40.595604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:39.228 07:52:40 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.228 07:52:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:12:39.228 07:52:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:39.228 [2024-10-09 07:52:40.993188] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
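The "45.05s"/"43.03s" figures reported by remove_attach_helper come from the timing wrapper traced at sh@19-sh@36 above: it runs the helper under bash's `time` keyword with TIMEFORMAT=%2R so only wall-clock seconds survive. A sketch of that pattern, reconstructed from the trace; the fd juggling is the standard idiom for capturing `time`'s stderr report while letting the timed command's own output pass through:

```bash
# Time a command and emit only its elapsed seconds, as timing_cmd does.
timing_cmd() {
    local time=0 TIMEFORMAT=%2R               # %2R: real time, two decimals
    exec 3>&1 4>&2                            # save the caller's stdout/stderr
    time=$( { time "$@" 1>&3 2>&4; } 2>&1 )   # capture only time's report
    exec 3>&- 4>&-
    echo "$time"
}

helper_time=$(timing_cmd remove_attach_helper 3 6 true)
printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))\n' \
    "$helper_time" 2
```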
00:12:39.228 [2024-10-09 07:52:40.997185] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:39.228 [2024-10-09 07:52:40.997242] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:39.228 [2024-10-09 07:52:40.997267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:39.228 [2024-10-09 07:52:40.997293] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:39.228 [2024-10-09 07:52:40.997311] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:39.228 [2024-10-09 07:52:40.997325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:39.228 [2024-10-09 07:52:40.997362] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:39.228 [2024-10-09 07:52:40.997378] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:39.228 [2024-10-09 07:52:40.997394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:39.228 [2024-10-09 07:52:40.997410] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:39.228 [2024-10-09 07:52:40.997426] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:39.228 [2024-10-09 07:52:40.997440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:39.228 07:52:41 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:12:39.228 07:52:41 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:39.228 07:52:41 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:39.228 07:52:41 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:39.228 07:52:41 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:39.228 07:52:41 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:39.228 07:52:41 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.228 07:52:41 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:39.228 07:52:41 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.228 07:52:41 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:12:39.228 07:52:41 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:39.486 07:52:41 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:39.486 07:52:41 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:39.486 07:52:41 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:39.486 07:52:41 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:39.486 07:52:41 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:39.486 07:52:41 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:39.486 07:52:41 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:39.486 07:52:41 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:12:39.486 07:52:41 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:39.486 07:52:41 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:39.486 07:52:41 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:51.793 07:52:53 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:51.793 07:52:53 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:51.793 07:52:53 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:51.793 07:52:53 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:51.793 07:52:53 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:51.793 07:52:53 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:51.793 07:52:53 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.793 07:52:53 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:51.793 07:52:53 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.793 07:52:53 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:51.793 07:52:53 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:51.793 07:52:53 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:51.793 07:52:53 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:51.793 07:52:53 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:51.793 07:52:53 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:51.793 07:52:53 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:12:51.793 07:52:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:51.793 07:52:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:51.793 [2024-10-09 07:52:53.594116] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
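With the wait loop drained, sh@56-sh@62 bring the devices back: rescan the PCI bus, then steer each rediscovered function to uio_pci_generic and clear the override afterwards. xtrace again hides the redirect targets, so every sysfs node below is inferred from the echoed values:

```bash
# Reattach sketch for sh@56-sh@62; all redirect targets are inferred.
echo 1 > /sys/bus/pci/rescan                                            # sh@56
for dev in "${nvmes[@]}"; do
    echo uio_pci_generic > "/sys/bus/pci/devices/$dev/driver_override"  # sh@59
    echo "$dev" > "/sys/bus/pci/devices/$dev/driver/unbind" || true     # sh@60 (guess)
    echo "$dev" > /sys/bus/pci/drivers_probe                            # sh@61 (guess)
    echo '' > "/sys/bus/pci/devices/$dev/driver_override"               # sh@62
done
```

Writing the driver name to driver_override before poking drivers_probe is the usual way to force a specific driver onto one function without touching module-wide ID tables; the final empty write restores normal matching for later tests.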
00:12:51.793 07:52:53 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:51.793 07:52:53 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:51.793 07:52:53 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:51.793 [2024-10-09 07:52:53.596238] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:51.793 [2024-10-09 07:52:53.596300] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:51.793 [2024-10-09 07:52:53.596323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:51.793 [2024-10-09 07:52:53.596371] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:51.793 [2024-10-09 07:52:53.596389] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:51.793 [2024-10-09 07:52:53.596406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:51.793 [2024-10-09 07:52:53.596422] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:51.793 [2024-10-09 07:52:53.596438] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:51.793 [2024-10-09 07:52:53.596452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:51.793 [2024-10-09 07:52:53.596472] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:51.793 07:52:53 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.793 [2024-10-09 07:52:53.596486] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:51.793 [2024-10-09 07:52:53.596503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:51.793 07:52:53 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:51.793 07:52:53 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.793 07:52:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:12:51.793 07:52:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:52.052 [2024-10-09 07:52:53.994119] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:12:52.052 [2024-10-09 07:52:53.996999] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:52.052 [2024-10-09 07:52:53.997054] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:52.052 [2024-10-09 07:52:53.997079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:52.052 [2024-10-09 07:52:53.997106] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:52.052 [2024-10-09 07:52:53.997127] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:52.052 [2024-10-09 07:52:53.997142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:52.052 [2024-10-09 07:52:53.997160] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:52.052 [2024-10-09 07:52:53.997175] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:52.052 [2024-10-09 07:52:53.997190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:52.052 [2024-10-09 07:52:53.997206] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:52.052 [2024-10-09 07:52:53.997221] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:52.052 [2024-10-09 07:52:53.997237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:52.052 [2024-10-09 07:52:53.997281] bdev_nvme.c:5390:aer_cb: *WARNING*: AER request execute failed 00:12:52.052 [2024-10-09 07:52:53.997318] bdev_nvme.c:5390:aer_cb: *WARNING*: AER request execute failed 00:12:52.052 [2024-10-09 07:52:53.997352] bdev_nvme.c:5390:aer_cb: *WARNING*: AER request execute failed 00:12:52.052 [2024-10-09 07:52:53.997368] bdev_nvme.c:5390:aer_cb: *WARNING*: AER request execute failed 00:12:52.310 07:52:54 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:12:52.310 07:52:54 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:52.311 07:52:54 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:52.311 07:52:54 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:52.311 07:52:54 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:52.311 07:52:54 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.311 07:52:54 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:52.311 07:52:54 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:52.311 07:52:54 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.311 07:52:54 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:12:52.311 07:52:54 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:52.311 07:52:54 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:52.311 07:52:54 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:52.311 07:52:54 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:10.0 00:12:52.569 07:52:54 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:52.569 07:52:54 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:52.569 07:52:54 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:52.569 07:52:54 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:52.569 07:52:54 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:12:52.569 07:52:54 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:52.569 07:52:54 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:52.569 07:52:54 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:13:04.832 07:53:06 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:13:04.832 07:53:06 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:13:04.832 07:53:06 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:13:04.832 07:53:06 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:04.832 07:53:06 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:04.832 07:53:06 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.832 07:53:06 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:04.832 07:53:06 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:04.832 07:53:06 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.832 07:53:06 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:13:04.832 07:53:06 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:13:04.832 07:53:06 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:04.832 07:53:06 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:04.832 07:53:06 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:04.832 07:53:06 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:04.832 [2024-10-09 07:53:06.594294] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
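After reattach, sh@66-sh@71 sleep 12 s (two 6-second hotplug waits) and assert that exactly the two expected BDFs are back; the backslash-escaped pattern in the sh@71 trace is just bash's xtrace rendering of a literal `[[ == ]]` match. An equivalent check:

```bash
# Verify both controllers re-enumerated as NVMe bdevs (sh@66-sh@71).
sleep 12
bdfs=($(bdev_bdfs))
[[ "${bdfs[*]}" == "0000:00:10.0 0000:00:11.0" ]] || {
    echo "unexpected bdev set after reattach: ${bdfs[*]}" >&2
    exit 1
}
```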
00:13:04.832 [2024-10-09 07:53:06.596642] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:04.832 [2024-10-09 07:53:06.596702] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:04.832 [2024-10-09 07:53:06.596725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:04.832 [2024-10-09 07:53:06.596759] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:04.832 [2024-10-09 07:53:06.596775] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:04.832 [2024-10-09 07:53:06.596792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:04.832 [2024-10-09 07:53:06.596819] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:04.832 [2024-10-09 07:53:06.596846] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:04.832 [2024-10-09 07:53:06.596862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:04.832 [2024-10-09 07:53:06.596880] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:04.832 [2024-10-09 07:53:06.596894] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:04.832 [2024-10-09 07:53:06.596910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:04.832 07:53:06 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:13:04.832 07:53:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:13:04.832 07:53:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:13:04.832 07:53:06 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:04.832 07:53:06 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:04.832 07:53:06 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:04.832 07:53:06 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.832 07:53:06 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:04.832 07:53:06 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.832 07:53:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:13:04.832 07:53:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:13:05.090 [2024-10-09 07:53:07.094297] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:13:05.090 [2024-10-09 07:53:07.097307] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:05.090 [2024-10-09 07:53:07.097390] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:05.090 [2024-10-09 07:53:07.097417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:05.090 [2024-10-09 07:53:07.097444] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:05.090 [2024-10-09 07:53:07.097470] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:05.090 [2024-10-09 07:53:07.097495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:05.090 [2024-10-09 07:53:07.097533] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:05.090 [2024-10-09 07:53:07.097551] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:05.091 [2024-10-09 07:53:07.097567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:05.091 [2024-10-09 07:53:07.097592] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:05.091 [2024-10-09 07:53:07.097620] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:05.091 [2024-10-09 07:53:07.097644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:05.349 07:53:07 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:13:05.349 07:53:07 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:13:05.349 07:53:07 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:13:05.349 07:53:07 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:05.349 07:53:07 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:05.349 07:53:07 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:05.349 07:53:07 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.349 07:53:07 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:05.349 07:53:07 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.349 07:53:07 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:13:05.349 07:53:07 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:13:05.349 07:53:07 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:05.349 07:53:07 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:05.349 07:53:07 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:13:05.607 07:53:07 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:13:05.607 07:53:07 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:05.607 07:53:07 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:05.607 07:53:07 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:05.607 07:53:07 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
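The last event completes just below (helper_time=45.06), after which sh@124/sh@125 tear the target down; the killprocess xtrace that follows shows each guard in the helper. Reconstructed from that trace:

```bash
# Teardown helper, rebuilt from the killprocess trace below.
killprocess() {
    local pid=$1 process_name
    [[ -n $pid ]] || return 1
    kill -0 "$pid" || return 1                        # is it still running?
    if [[ $(uname) == Linux ]]; then
        process_name=$(ps --no-headers -o comm= "$pid")
    fi
    if [[ $process_name == sudo ]]; then
        return 1    # the real helper special-cases sudo here (assumption)
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"
}
```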
00:13:05.607 07:53:07 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:13:05.607 07:53:07 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:05.607 07:53:07 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:13:17.840 07:53:19 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:13:17.840 07:53:19 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:13:17.840 07:53:19 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:13:17.840 07:53:19 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:17.840 07:53:19 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:17.840 07:53:19 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:17.840 07:53:19 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.840 07:53:19 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:17.840 07:53:19 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.840 07:53:19 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:13:17.840 07:53:19 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:13:17.840 07:53:19 sw_hotplug -- common/autotest_common.sh@717 -- # time=45.06 00:13:17.840 07:53:19 sw_hotplug -- common/autotest_common.sh@718 -- # echo 45.06 00:13:17.840 07:53:19 sw_hotplug -- common/autotest_common.sh@720 -- # return 0 00:13:17.840 07:53:19 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.06 00:13:17.840 07:53:19 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.06 2 00:13:17.840 remove_attach_helper took 45.06s to complete (handling 2 nvme drive(s)) 07:53:19 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # trap - SIGINT SIGTERM EXIT 00:13:17.840 07:53:19 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # killprocess 69211 00:13:17.840 07:53:19 sw_hotplug -- common/autotest_common.sh@950 -- # '[' -z 69211 ']' 00:13:17.840 07:53:19 sw_hotplug -- common/autotest_common.sh@954 -- # kill -0 69211 00:13:17.840 07:53:19 sw_hotplug -- common/autotest_common.sh@955 -- # uname 00:13:17.840 07:53:19 sw_hotplug -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:17.840 07:53:19 sw_hotplug -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69211 00:13:17.840 07:53:19 sw_hotplug -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:17.840 07:53:19 sw_hotplug -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:17.840 killing process with pid 69211 00:13:17.840 07:53:19 sw_hotplug -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69211' 00:13:17.840 07:53:19 sw_hotplug -- common/autotest_common.sh@969 -- # kill 69211 00:13:17.840 07:53:19 sw_hotplug -- common/autotest_common.sh@974 -- # wait 69211 00:13:20.371 07:53:21 sw_hotplug -- nvme/sw_hotplug.sh@154 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:13:20.371 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:20.629 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:13:20.629 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:13:20.886 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:13:20.886 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:13:20.886 00:13:20.886 real 2m31.572s 00:13:20.886 user 1m52.152s 00:13:20.886 sys 0m19.138s 00:13:20.886 07:53:22 sw_hotplug -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:13:20.886 07:53:22 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:20.886 ************************************ 00:13:20.886 END TEST sw_hotplug 00:13:20.886 ************************************ 00:13:20.886 07:53:22 -- spdk/autotest.sh@243 -- # [[ 1 -eq 1 ]] 00:13:20.886 07:53:22 -- spdk/autotest.sh@244 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:13:20.886 07:53:22 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:13:20.886 07:53:22 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:20.886 07:53:22 -- common/autotest_common.sh@10 -- # set +x 00:13:20.886 ************************************ 00:13:20.886 START TEST nvme_xnvme 00:13:20.886 ************************************ 00:13:20.886 07:53:22 nvme_xnvme -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:13:21.145 * Looking for test storage... 00:13:21.145 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:13:21.145 07:53:22 nvme_xnvme -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:13:21.145 07:53:22 nvme_xnvme -- common/autotest_common.sh@1681 -- # lcov --version 00:13:21.145 07:53:22 nvme_xnvme -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:13:21.145 07:53:23 nvme_xnvme -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:13:21.145 07:53:23 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:21.145 07:53:23 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:21.145 07:53:23 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:21.145 07:53:23 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:13:21.145 07:53:23 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:13:21.145 07:53:23 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:13:21.145 07:53:23 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:13:21.145 07:53:23 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:13:21.145 07:53:23 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:13:21.145 07:53:23 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:13:21.145 07:53:23 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:21.145 07:53:23 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:13:21.145 07:53:23 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:13:21.145 07:53:23 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:21.145 07:53:23 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:21.145 07:53:23 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:13:21.145 07:53:23 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:13:21.145 07:53:23 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:21.145 07:53:23 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:13:21.145 07:53:23 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:13:21.145 07:53:23 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:13:21.145 07:53:23 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:13:21.145 07:53:23 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:21.145 07:53:23 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:13:21.145 07:53:23 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:13:21.145 07:53:23 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:21.145 07:53:23 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:21.145 07:53:23 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:13:21.145 07:53:23 nvme_xnvme -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:21.145 07:53:23 nvme_xnvme -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:13:21.145 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:21.145 --rc genhtml_branch_coverage=1 00:13:21.145 --rc genhtml_function_coverage=1 00:13:21.145 --rc genhtml_legend=1 00:13:21.145 --rc geninfo_all_blocks=1 00:13:21.145 --rc geninfo_unexecuted_blocks=1 00:13:21.145 00:13:21.145 ' 00:13:21.145 07:53:23 nvme_xnvme -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:13:21.145 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:21.145 --rc genhtml_branch_coverage=1 00:13:21.145 --rc genhtml_function_coverage=1 00:13:21.145 --rc genhtml_legend=1 00:13:21.145 --rc geninfo_all_blocks=1 00:13:21.145 --rc geninfo_unexecuted_blocks=1 00:13:21.145 00:13:21.145 ' 00:13:21.145 07:53:23 nvme_xnvme -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:13:21.145 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:21.145 --rc genhtml_branch_coverage=1 00:13:21.145 --rc genhtml_function_coverage=1 00:13:21.145 --rc genhtml_legend=1 00:13:21.145 --rc geninfo_all_blocks=1 00:13:21.145 --rc geninfo_unexecuted_blocks=1 00:13:21.145 00:13:21.145 ' 00:13:21.145 07:53:23 nvme_xnvme -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:13:21.145 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:21.145 --rc genhtml_branch_coverage=1 00:13:21.145 --rc genhtml_function_coverage=1 00:13:21.145 --rc genhtml_legend=1 00:13:21.145 --rc geninfo_all_blocks=1 00:13:21.145 --rc geninfo_unexecuted_blocks=1 00:13:21.145 00:13:21.145 ' 00:13:21.145 07:53:23 nvme_xnvme -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:21.145 07:53:23 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:13:21.145 07:53:23 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:21.145 07:53:23 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:21.145 07:53:23 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:21.145 07:53:23 nvme_xnvme -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:21.145 07:53:23 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:21.145 07:53:23 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:21.145 07:53:23 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:13:21.145 07:53:23 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:21.145 07:53:23 nvme_xnvme -- xnvme/xnvme.sh@85 -- # run_test xnvme_to_malloc_dd_copy malloc_to_xnvme_copy 00:13:21.145 07:53:23 nvme_xnvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:13:21.145 07:53:23 nvme_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:21.145 07:53:23 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:21.145 ************************************ 00:13:21.145 START TEST xnvme_to_malloc_dd_copy 00:13:21.145 ************************************ 00:13:21.145 07:53:23 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@1125 -- # malloc_to_xnvme_copy 00:13:21.145 07:53:23 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@14 -- # init_null_blk gb=1 00:13:21.145 07:53:23 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@186 -- # [[ -e /sys/module/null_blk ]] 00:13:21.145 07:53:23 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@186 -- # modprobe null_blk gb=1 00:13:21.145 07:53:23 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@187 -- # return 00:13:21.145 07:53:23 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@16 -- # local mbdev0=malloc0 mbdev0_bs=512 00:13:21.145 07:53:23 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # xnvme_io=() 00:13:21.145 07:53:23 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:13:21.145 07:53:23 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@18 -- # local io 00:13:21.145 07:53:23 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@20 -- # xnvme_io+=(libaio) 00:13:21.145 07:53:23 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@21 -- # xnvme_io+=(io_uring) 00:13:21.145 07:53:23 
nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@25 -- # mbdev0_b=2097152 00:13:21.145 07:53:23 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@26 -- # xnvme0_dev=/dev/nullb0 00:13:21.145 07:53:23 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='2097152' ['block_size']='512') 00:13:21.145 07:53:23 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # local -A method_bdev_malloc_create_0 00:13:21.145 07:53:23 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # method_bdev_xnvme_create_0=() 00:13:21.145 07:53:23 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # local -A method_bdev_xnvme_create_0 00:13:21.145 07:53:23 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@35 -- # method_bdev_xnvme_create_0["name"]=null0 00:13:21.145 07:53:23 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@36 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:13:21.145 07:53:23 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:13:21.145 07:53:23 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:13:21.145 07:53:23 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:13:21.145 07:53:23 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # gen_conf 00:13:21.145 07:53:23 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:13:21.145 07:53:23 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:13:21.145 { 00:13:21.145 "subsystems": [ 00:13:21.145 { 00:13:21.145 "subsystem": "bdev", 00:13:21.145 "config": [ 00:13:21.145 { 00:13:21.145 "params": { 00:13:21.145 "block_size": 512, 00:13:21.145 "num_blocks": 2097152, 00:13:21.145 "name": "malloc0" 00:13:21.145 }, 00:13:21.145 "method": "bdev_malloc_create" 00:13:21.145 }, 00:13:21.145 { 00:13:21.145 "params": { 00:13:21.145 "io_mechanism": "libaio", 00:13:21.145 "filename": "/dev/nullb0", 00:13:21.145 "name": "null0" 00:13:21.145 }, 00:13:21.145 "method": "bdev_xnvme_create" 00:13:21.145 }, 00:13:21.145 { 00:13:21.145 "method": "bdev_wait_for_examine" 00:13:21.145 } 00:13:21.145 ] 00:13:21.145 } 00:13:21.145 ] 00:13:21.145 } 00:13:21.402 [2024-10-09 07:53:23.179585] Starting SPDK v25.01-pre git sha1 1c2942c86 / DPDK 24.03.0 initialization... 
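For orientation: the JSON blob above is what gen_conf feeds spdk_dd over /dev/fd/62, declaring a 1 GiB malloc bdev (2097152 blocks x 512 bytes) alongside an xnvme bdev over the null_blk device that init_null_blk loaded. A minimal standalone sketch of this first pass, assuming the same SPDK build tree and a fresh /dev/nullb0 (the file /tmp/xnvme_dd.json is an invention for illustration; the test pipes the config instead of writing a file):

    # load null_blk with a 1 GiB device, as init_null_blk does above
    sudo modprobe null_blk gb=1

    # the same bdev config the test generates inline
    cat > /tmp/xnvme_dd.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            { "params": { "block_size": 512, "num_blocks": 2097152, "name": "malloc0" },
              "method": "bdev_malloc_create" },
            { "params": { "io_mechanism": "libaio", "filename": "/dev/nullb0", "name": "null0" },
              "method": "bdev_xnvme_create" },
            { "method": "bdev_wait_for_examine" }
          ]
        }
      ]
    }
    EOF

    # copy malloc0 -> null0, mirroring the xnvme.sh@42 invocation traced above
    ./build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /tmp/xnvme_dd.json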
00:13:21.402 [2024-10-09 07:53:23.179734] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70566 ] 00:13:21.402 [2024-10-09 07:53:23.346685] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:21.660 [2024-10-09 07:53:23.578149] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:13:24.190  [2024-10-09T07:53:27.135Z] Copying: 176/1024 [MB] (176 MBps) [2024-10-09T07:53:28.097Z] Copying: 353/1024 [MB] (176 MBps) [2024-10-09T07:53:29.032Z] Copying: 530/1024 [MB] (176 MBps) [2024-10-09T07:53:29.968Z] Copying: 704/1024 [MB] (174 MBps) [2024-10-09T07:53:30.921Z] Copying: 875/1024 [MB] (170 MBps) [2024-10-09T07:53:35.107Z] Copying: 1024/1024 [MB] (average 174 MBps) 00:13:33.095 00:13:33.095 07:53:34 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:13:33.095 07:53:34 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:13:33.095 07:53:34 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:13:33.095 07:53:34 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:13:33.095 { 00:13:33.095 "subsystems": [ 00:13:33.095 { 00:13:33.095 "subsystem": "bdev", 00:13:33.095 "config": [ 00:13:33.095 { 00:13:33.095 "params": { 00:13:33.095 "block_size": 512, 00:13:33.095 "num_blocks": 2097152, 00:13:33.095 "name": "malloc0" 00:13:33.095 }, 00:13:33.095 "method": "bdev_malloc_create" 00:13:33.095 }, 00:13:33.095 { 00:13:33.095 "params": { 00:13:33.095 "io_mechanism": "libaio", 00:13:33.095 "filename": "/dev/nullb0", 00:13:33.095 "name": "null0" 00:13:33.095 }, 00:13:33.095 "method": "bdev_xnvme_create" 00:13:33.095 }, 00:13:33.095 { 00:13:33.095 "method": "bdev_wait_for_examine" 00:13:33.095 } 00:13:33.095 ] 00:13:33.095 } 00:13:33.095 ] 00:13:33.095 } 00:13:33.095 [2024-10-09 07:53:34.368587] Starting SPDK v25.01-pre git sha1 1c2942c86 / DPDK 24.03.0 initialization... 
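The second pass (xnvme.sh@47 above) keeps the identical bdev layout and only swaps the spdk_dd endpoints, reading the 1 GiB back out of the null device. With the hypothetical config file from the previous sketch:

    # reverse direction: null0 -> malloc0, same JSON config
    ./build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /tmp/xnvme_dd.json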
00:13:33.095 [2024-10-09 07:53:34.368766] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70694 ] 00:13:33.095 [2024-10-09 07:53:34.552113] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:33.095 [2024-10-09 07:53:34.739465] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:13:35.015  [2024-10-09T07:53:38.035Z] Copying: 171/1024 [MB] (171 MBps) [2024-10-09T07:53:39.408Z] Copying: 342/1024 [MB] (170 MBps) [2024-10-09T07:53:40.342Z] Copying: 511/1024 [MB] (169 MBps) [2024-10-09T07:53:41.306Z] Copying: 684/1024 [MB] (172 MBps) [2024-10-09T07:53:42.256Z] Copying: 852/1024 [MB] (168 MBps) [2024-10-09T07:53:42.256Z] Copying: 1023/1024 [MB] (171 MBps) [2024-10-09T07:53:46.441Z] Copying: 1024/1024 [MB] (average 170 MBps) 00:13:44.429 00:13:44.429 07:53:45 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:13:44.429 07:53:45 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:13:44.429 07:53:45 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:13:44.429 07:53:45 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # gen_conf 00:13:44.429 07:53:45 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:13:44.429 07:53:45 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:13:44.429 { 00:13:44.429 "subsystems": [ 00:13:44.429 { 00:13:44.429 "subsystem": "bdev", 00:13:44.429 "config": [ 00:13:44.429 { 00:13:44.429 "params": { 00:13:44.429 "block_size": 512, 00:13:44.429 "num_blocks": 2097152, 00:13:44.429 "name": "malloc0" 00:13:44.429 }, 00:13:44.429 "method": "bdev_malloc_create" 00:13:44.429 }, 00:13:44.429 { 00:13:44.429 "params": { 00:13:44.429 "io_mechanism": "io_uring", 00:13:44.429 "filename": "/dev/nullb0", 00:13:44.429 "name": "null0" 00:13:44.429 }, 00:13:44.429 "method": "bdev_xnvme_create" 00:13:44.429 }, 00:13:44.429 { 00:13:44.429 "method": "bdev_wait_for_examine" 00:13:44.429 } 00:13:44.429 ] 00:13:44.429 } 00:13:44.429 ] 00:13:44.429 } 00:13:44.429 [2024-10-09 07:53:45.760680] Starting SPDK v25.01-pre git sha1 1c2942c86 / DPDK 24.03.0 initialization... 
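Third pass: the only change is method_bdev_xnvme_create_0["io_mechanism"] flipping from libaio to io_uring, as the JSON above shows. Outside spdk_dd's inline config, the same bdev could be created against a running target with the positional RPC the blockdev suite uses later in this log; the rpc.py path is an assumption, but the filename/name/io_mechanism argument order matches the bdev_xnvme_create calls traced below:

    # hypothetical standalone equivalent of the io_uring xnvme bdev above
    ./scripts/rpc.py bdev_xnvme_create /dev/nullb0 null0 io_uring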
00:13:44.429 [2024-10-09 07:53:45.760857] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70821 ] 00:13:44.429 [2024-10-09 07:53:45.938882] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:44.429 [2024-10-09 07:53:46.174119] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:13:46.957  [2024-10-09T07:53:49.536Z] Copying: 186/1024 [MB] (186 MBps) [2024-10-09T07:53:50.500Z] Copying: 374/1024 [MB] (187 MBps) [2024-10-09T07:53:51.874Z] Copying: 560/1024 [MB] (186 MBps) [2024-10-09T07:53:52.861Z] Copying: 747/1024 [MB] (186 MBps) [2024-10-09T07:53:53.118Z] Copying: 931/1024 [MB] (184 MBps) [2024-10-09T07:53:57.303Z] Copying: 1024/1024 [MB] (average 186 MBps) 00:13:55.291 00:13:55.291 07:53:56 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:13:55.291 07:53:56 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:13:55.291 07:53:56 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:13:55.291 07:53:56 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:13:55.291 { 00:13:55.291 "subsystems": [ 00:13:55.291 { 00:13:55.291 "subsystem": "bdev", 00:13:55.291 "config": [ 00:13:55.291 { 00:13:55.291 "params": { 00:13:55.291 "block_size": 512, 00:13:55.291 "num_blocks": 2097152, 00:13:55.291 "name": "malloc0" 00:13:55.291 }, 00:13:55.291 "method": "bdev_malloc_create" 00:13:55.291 }, 00:13:55.291 { 00:13:55.291 "params": { 00:13:55.291 "io_mechanism": "io_uring", 00:13:55.291 "filename": "/dev/nullb0", 00:13:55.291 "name": "null0" 00:13:55.291 }, 00:13:55.291 "method": "bdev_xnvme_create" 00:13:55.291 }, 00:13:55.291 { 00:13:55.291 "method": "bdev_wait_for_examine" 00:13:55.291 } 00:13:55.291 ] 00:13:55.291 } 00:13:55.291 ] 00:13:55.291 } 00:13:55.291 [2024-10-09 07:53:56.633590] Starting SPDK v25.01-pre git sha1 1c2942c86 / DPDK 24.03.0 initialization... 
00:13:55.291 [2024-10-09 07:53:56.633791] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70936 ] 00:13:55.291 [2024-10-09 07:53:56.801060] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:55.291 [2024-10-09 07:53:57.026970] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:13:57.848  [2024-10-09T07:54:00.426Z] Copying: 183/1024 [MB] (183 MBps) [2024-10-09T07:54:01.362Z] Copying: 363/1024 [MB] (179 MBps) [2024-10-09T07:54:02.297Z] Copying: 547/1024 [MB] (183 MBps) [2024-10-09T07:54:03.672Z] Copying: 729/1024 [MB] (182 MBps) [2024-10-09T07:54:03.930Z] Copying: 913/1024 [MB] (183 MBps) [2024-10-09T07:54:08.164Z] Copying: 1024/1024 [MB] (average 182 MBps) 00:14:06.152 00:14:06.153 07:54:07 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@52 -- # remove_null_blk 00:14:06.153 07:54:07 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@191 -- # modprobe -r null_blk 00:14:06.153 00:14:06.153 real 0m44.527s 00:14:06.153 user 0m39.174s 00:14:06.153 sys 0m4.755s 00:14:06.153 07:54:07 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:06.153 ************************************ 00:14:06.153 END TEST xnvme_to_malloc_dd_copy 00:14:06.153 ************************************ 00:14:06.153 07:54:07 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:14:06.153 07:54:07 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:14:06.153 07:54:07 nvme_xnvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:14:06.153 07:54:07 nvme_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:06.153 07:54:07 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:06.153 ************************************ 00:14:06.153 START TEST xnvme_bdevperf 00:14:06.153 ************************************ 00:14:06.153 07:54:07 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1125 -- # xnvme_bdevperf 00:14:06.153 07:54:07 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@57 -- # init_null_blk gb=1 00:14:06.153 07:54:07 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@186 -- # [[ -e /sys/module/null_blk ]] 00:14:06.153 07:54:07 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@186 -- # modprobe null_blk gb=1 00:14:06.153 07:54:07 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@187 -- # return 00:14:06.153 07:54:07 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # xnvme_io=() 00:14:06.153 07:54:07 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:14:06.153 07:54:07 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@60 -- # local io 00:14:06.153 07:54:07 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@62 -- # xnvme_io+=(libaio) 00:14:06.153 07:54:07 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@63 -- # xnvme_io+=(io_uring) 00:14:06.153 07:54:07 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@65 -- # xnvme0_dev=/dev/nullb0 00:14:06.153 07:54:07 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # method_bdev_xnvme_create_0=() 00:14:06.153 07:54:07 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # local -A method_bdev_xnvme_create_0 00:14:06.153 07:54:07 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@68 -- # method_bdev_xnvme_create_0["name"]=null0 00:14:06.153 07:54:07 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@69 -- # 
method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:14:06.153 07:54:07 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:14:06.153 07:54:07 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:14:06.153 07:54:07 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:14:06.153 07:54:07 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # gen_conf 00:14:06.153 07:54:07 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:06.153 07:54:07 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:06.153 { 00:14:06.153 "subsystems": [ 00:14:06.153 { 00:14:06.153 "subsystem": "bdev", 00:14:06.153 "config": [ 00:14:06.153 { 00:14:06.153 "params": { 00:14:06.153 "io_mechanism": "libaio", 00:14:06.153 "filename": "/dev/nullb0", 00:14:06.153 "name": "null0" 00:14:06.153 }, 00:14:06.153 "method": "bdev_xnvme_create" 00:14:06.153 }, 00:14:06.153 { 00:14:06.153 "method": "bdev_wait_for_examine" 00:14:06.153 } 00:14:06.153 ] 00:14:06.153 } 00:14:06.153 ] 00:14:06.153 } 00:14:06.153 [2024-10-09 07:54:07.770964] Starting SPDK v25.01-pre git sha1 1c2942c86 / DPDK 24.03.0 initialization... 00:14:06.153 [2024-10-09 07:54:07.771151] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71084 ] 00:14:06.153 [2024-10-09 07:54:07.946660] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:06.153 [2024-10-09 07:54:08.141097] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:14:06.719 Running I/O for 5 seconds... 
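The 5-second run below issues 4 KiB random reads at queue depth 64 against null0. A standalone equivalent, assuming a config file containing just the null0 xnvme bdev (same shape as the earlier JSON, minus malloc0; /tmp/xnvme_bdevperf.json is hypothetical):

    # flags verbatim from the xnvme.sh@74 invocation above
    ./build/examples/bdevperf --json /tmp/xnvme_bdevperf.json \
        -q 64 -w randread -t 5 -T null0 -o 4096

In the summaries below, the MiB/s column is just IOPS scaled by the 4 KiB IO size: for instance 114547.32 x 4096 / 1048576 ≈ 447.45 MiB/s.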
00:14:08.588 115968.00 IOPS, 453.00 MiB/s [2024-10-09T07:54:11.534Z] 114752.00 IOPS, 448.25 MiB/s [2024-10-09T07:54:12.469Z] 115114.67 IOPS, 449.67 MiB/s [2024-10-09T07:54:13.468Z] 114832.00 IOPS, 448.56 MiB/s 00:14:11.456 Latency(us) 00:14:11.456 [2024-10-09T07:54:13.468Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:11.456 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:14:11.456 null0 : 5.00 114547.32 447.45 0.00 0.00 555.29 178.73 2442.71 00:14:11.456 [2024-10-09T07:54:13.468Z] =================================================================================================================== 00:14:11.456 [2024-10-09T07:54:13.468Z] Total : 114547.32 447.45 0.00 0.00 555.29 178.73 2442.71 00:14:12.830 07:54:14 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:14:12.830 07:54:14 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:14:12.830 07:54:14 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:14:12.830 07:54:14 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # gen_conf 00:14:12.830 07:54:14 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:12.830 07:54:14 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:12.830 { 00:14:12.830 "subsystems": [ 00:14:12.830 { 00:14:12.830 "subsystem": "bdev", 00:14:12.830 "config": [ 00:14:12.830 { 00:14:12.830 "params": { 00:14:12.830 "io_mechanism": "io_uring", 00:14:12.830 "filename": "/dev/nullb0", 00:14:12.830 "name": "null0" 00:14:12.830 }, 00:14:12.830 "method": "bdev_xnvme_create" 00:14:12.830 }, 00:14:12.830 { 00:14:12.830 "method": "bdev_wait_for_examine" 00:14:12.830 } 00:14:12.830 ] 00:14:12.831 } 00:14:12.831 ] 00:14:12.831 } 00:14:12.831 [2024-10-09 07:54:14.724162] Starting SPDK v25.01-pre git sha1 1c2942c86 / DPDK 24.03.0 initialization... 00:14:12.831 [2024-10-09 07:54:14.724321] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71164 ] 00:14:13.088 [2024-10-09 07:54:14.903814] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:13.088 [2024-10-09 07:54:15.094832] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:14:13.664 Running I/O for 5 seconds... 
00:14:15.528 150336.00 IOPS, 587.25 MiB/s [2024-10-09T07:54:18.473Z] 150592.00 IOPS, 588.25 MiB/s [2024-10-09T07:54:19.845Z] 149418.67 IOPS, 583.67 MiB/s [2024-10-09T07:54:20.410Z] 149328.00 IOPS, 583.31 MiB/s 00:14:18.398 Latency(us) 00:14:18.398 [2024-10-09T07:54:20.410Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:18.398 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:14:18.398 null0 : 5.00 148902.09 581.65 0.00 0.00 426.50 251.35 2308.65 00:14:18.398 [2024-10-09T07:54:20.411Z] =================================================================================================================== 00:14:18.399 [2024-10-09T07:54:20.411Z] Total : 148902.09 581.65 0.00 0.00 426.50 251.35 2308.65 00:14:19.773 07:54:21 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@82 -- # remove_null_blk 00:14:19.773 07:54:21 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@191 -- # modprobe -r null_blk 00:14:19.773 00:14:19.773 real 0m14.002s 00:14:19.773 user 0m11.040s 00:14:19.773 sys 0m2.731s 00:14:19.773 07:54:21 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:19.773 07:54:21 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:19.773 ************************************ 00:14:19.773 END TEST xnvme_bdevperf 00:14:19.773 ************************************ 00:14:19.773 00:14:19.773 real 0m58.805s 00:14:19.773 user 0m50.367s 00:14:19.773 sys 0m7.607s 00:14:19.773 07:54:21 nvme_xnvme -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:19.773 07:54:21 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:19.773 ************************************ 00:14:19.773 END TEST nvme_xnvme 00:14:19.773 ************************************ 00:14:19.773 07:54:21 -- spdk/autotest.sh@245 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:14:19.773 07:54:21 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:19.773 07:54:21 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:19.773 07:54:21 -- common/autotest_common.sh@10 -- # set +x 00:14:19.773 ************************************ 00:14:19.773 START TEST blockdev_xnvme 00:14:19.773 ************************************ 00:14:19.773 07:54:21 blockdev_xnvme -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:14:19.773 * Looking for test storage... 
00:14:19.773 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:14:19.773 07:54:21 blockdev_xnvme -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:14:19.773 07:54:21 blockdev_xnvme -- common/autotest_common.sh@1681 -- # lcov --version 00:14:19.773 07:54:21 blockdev_xnvme -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:14:20.032 07:54:21 blockdev_xnvme -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:14:20.032 07:54:21 blockdev_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:20.032 07:54:21 blockdev_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:20.032 07:54:21 blockdev_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:20.032 07:54:21 blockdev_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:14:20.032 07:54:21 blockdev_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:14:20.032 07:54:21 blockdev_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:14:20.032 07:54:21 blockdev_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:14:20.032 07:54:21 blockdev_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:14:20.032 07:54:21 blockdev_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:14:20.032 07:54:21 blockdev_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:14:20.032 07:54:21 blockdev_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:20.032 07:54:21 blockdev_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:14:20.032 07:54:21 blockdev_xnvme -- scripts/common.sh@345 -- # : 1 00:14:20.032 07:54:21 blockdev_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:20.032 07:54:21 blockdev_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:20.032 07:54:21 blockdev_xnvme -- scripts/common.sh@365 -- # decimal 1 00:14:20.032 07:54:21 blockdev_xnvme -- scripts/common.sh@353 -- # local d=1 00:14:20.032 07:54:21 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:20.032 07:54:21 blockdev_xnvme -- scripts/common.sh@355 -- # echo 1 00:14:20.032 07:54:21 blockdev_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:14:20.032 07:54:21 blockdev_xnvme -- scripts/common.sh@366 -- # decimal 2 00:14:20.032 07:54:21 blockdev_xnvme -- scripts/common.sh@353 -- # local d=2 00:14:20.032 07:54:21 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:20.032 07:54:21 blockdev_xnvme -- scripts/common.sh@355 -- # echo 2 00:14:20.032 07:54:21 blockdev_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:14:20.032 07:54:21 blockdev_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:20.032 07:54:21 blockdev_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:20.032 07:54:21 blockdev_xnvme -- scripts/common.sh@368 -- # return 0 00:14:20.032 07:54:21 blockdev_xnvme -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:20.032 07:54:21 blockdev_xnvme -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:14:20.032 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:20.032 --rc genhtml_branch_coverage=1 00:14:20.032 --rc genhtml_function_coverage=1 00:14:20.032 --rc genhtml_legend=1 00:14:20.032 --rc geninfo_all_blocks=1 00:14:20.032 --rc geninfo_unexecuted_blocks=1 00:14:20.032 00:14:20.032 ' 00:14:20.032 07:54:21 blockdev_xnvme -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:14:20.032 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:20.032 --rc genhtml_branch_coverage=1 00:14:20.032 --rc genhtml_function_coverage=1 00:14:20.032 --rc genhtml_legend=1 
00:14:20.032 --rc geninfo_all_blocks=1 00:14:20.032 --rc geninfo_unexecuted_blocks=1 00:14:20.032 00:14:20.032 ' 00:14:20.032 07:54:21 blockdev_xnvme -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:14:20.032 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:20.032 --rc genhtml_branch_coverage=1 00:14:20.032 --rc genhtml_function_coverage=1 00:14:20.032 --rc genhtml_legend=1 00:14:20.032 --rc geninfo_all_blocks=1 00:14:20.032 --rc geninfo_unexecuted_blocks=1 00:14:20.032 00:14:20.032 ' 00:14:20.032 07:54:21 blockdev_xnvme -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:14:20.032 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:20.032 --rc genhtml_branch_coverage=1 00:14:20.032 --rc genhtml_function_coverage=1 00:14:20.032 --rc genhtml_legend=1 00:14:20.032 --rc geninfo_all_blocks=1 00:14:20.032 --rc geninfo_unexecuted_blocks=1 00:14:20.032 00:14:20.032 ' 00:14:20.032 07:54:21 blockdev_xnvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:14:20.032 07:54:21 blockdev_xnvme -- bdev/nbd_common.sh@6 -- # set -e 00:14:20.032 07:54:21 blockdev_xnvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:14:20.032 07:54:21 blockdev_xnvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:14:20.032 07:54:21 blockdev_xnvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:14:20.032 07:54:21 blockdev_xnvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:14:20.032 07:54:21 blockdev_xnvme -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:14:20.032 07:54:21 blockdev_xnvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:14:20.032 07:54:21 blockdev_xnvme -- bdev/blockdev.sh@20 -- # : 00:14:20.032 07:54:21 blockdev_xnvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:14:20.032 07:54:21 blockdev_xnvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:14:20.032 07:54:21 blockdev_xnvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:14:20.032 07:54:21 blockdev_xnvme -- bdev/blockdev.sh@673 -- # uname -s 00:14:20.032 07:54:21 blockdev_xnvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:14:20.032 07:54:21 blockdev_xnvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:14:20.032 07:54:21 blockdev_xnvme -- bdev/blockdev.sh@681 -- # test_type=xnvme 00:14:20.032 07:54:21 blockdev_xnvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:14:20.032 07:54:21 blockdev_xnvme -- bdev/blockdev.sh@683 -- # dek= 00:14:20.032 07:54:21 blockdev_xnvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:14:20.032 07:54:21 blockdev_xnvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:14:20.032 07:54:21 blockdev_xnvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:14:20.032 07:54:21 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == bdev ]] 00:14:20.032 07:54:21 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == crypto_* ]] 00:14:20.032 07:54:21 blockdev_xnvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:14:20.032 07:54:21 blockdev_xnvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=71312 00:14:20.032 07:54:21 blockdev_xnvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:14:20.032 07:54:21 blockdev_xnvme -- bdev/blockdev.sh@49 -- # waitforlisten 71312 00:14:20.032 07:54:21 blockdev_xnvme -- common/autotest_common.sh@831 -- # '[' -z 71312 ']' 00:14:20.032 07:54:21 blockdev_xnvme -- bdev/blockdev.sh@46 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:14:20.032 07:54:21 blockdev_xnvme -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:20.032 07:54:21 blockdev_xnvme -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:20.032 07:54:21 blockdev_xnvme -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:20.032 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:20.032 07:54:21 blockdev_xnvme -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:20.032 07:54:21 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:20.032 [2024-10-09 07:54:22.033943] Starting SPDK v25.01-pre git sha1 1c2942c86 / DPDK 24.03.0 initialization... 00:14:20.032 [2024-10-09 07:54:22.034125] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71312 ] 00:14:20.291 [2024-10-09 07:54:22.208097] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:20.549 [2024-10-09 07:54:22.442242] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:14:21.484 07:54:23 blockdev_xnvme -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:21.484 07:54:23 blockdev_xnvme -- common/autotest_common.sh@864 -- # return 0 00:14:21.484 07:54:23 blockdev_xnvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:14:21.484 07:54:23 blockdev_xnvme -- bdev/blockdev.sh@728 -- # setup_xnvme_conf 00:14:21.484 07:54:23 blockdev_xnvme -- bdev/blockdev.sh@88 -- # local io_mechanism=io_uring 00:14:21.484 07:54:23 blockdev_xnvme -- bdev/blockdev.sh@89 -- # local nvme nvmes 00:14:21.484 07:54:23 blockdev_xnvme -- bdev/blockdev.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:14:21.741 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:21.741 Waiting for block devices as requested 00:14:21.741 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:14:21.999 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:14:21.999 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:14:21.999 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:14:27.265 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:14:27.265 07:54:29 blockdev_xnvme -- bdev/blockdev.sh@92 -- # get_zoned_devs 00:14:27.265 07:54:29 blockdev_xnvme -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:14:27.265 07:54:29 blockdev_xnvme -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:14:27.265 07:54:29 blockdev_xnvme -- common/autotest_common.sh@1656 -- # local nvme bdf 00:14:27.265 07:54:29 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:14:27.265 07:54:29 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:14:27.265 07:54:29 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:14:27.265 07:54:29 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:14:27.265 07:54:29 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:14:27.265 07:54:29 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:14:27.265 07:54:29 blockdev_xnvme -- common/autotest_common.sh@1659 -- 
# is_block_zoned nvme1n1 00:14:27.265 07:54:29 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:14:27.265 07:54:29 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:14:27.265 07:54:29 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:14:27.265 07:54:29 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:14:27.265 07:54:29 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n1 00:14:27.265 07:54:29 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme2n1 00:14:27.265 07:54:29 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:14:27.265 07:54:29 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:14:27.265 07:54:29 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:14:27.265 07:54:29 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n2 00:14:27.265 07:54:29 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme2n2 00:14:27.265 07:54:29 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:14:27.265 07:54:29 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:14:27.265 07:54:29 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:14:27.265 07:54:29 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n3 00:14:27.265 07:54:29 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme2n3 00:14:27.265 07:54:29 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:14:27.265 07:54:29 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:14:27.265 07:54:29 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:14:27.265 07:54:29 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3c3n1 00:14:27.265 07:54:29 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme3c3n1 00:14:27.265 07:54:29 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:14:27.265 07:54:29 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:14:27.265 07:54:29 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:14:27.265 07:54:29 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3n1 00:14:27.265 07:54:29 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme3n1 00:14:27.265 07:54:29 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:14:27.265 07:54:29 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:14:27.265 07:54:29 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:14:27.265 07:54:29 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n1 ]] 00:14:27.265 07:54:29 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:14:27.265 07:54:29 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:14:27.265 07:54:29 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:14:27.265 07:54:29 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n1 ]] 00:14:27.265 07:54:29 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:14:27.265 07:54:29 
blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:14:27.265 07:54:29 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:14:27.265 07:54:29 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n1 ]] 00:14:27.265 07:54:29 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:14:27.265 07:54:29 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:14:27.265 07:54:29 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:14:27.265 07:54:29 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n2 ]] 00:14:27.265 07:54:29 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:14:27.265 07:54:29 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:14:27.265 07:54:29 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:14:27.265 07:54:29 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n3 ]] 00:14:27.265 07:54:29 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:14:27.265 07:54:29 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:14:27.265 07:54:29 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:14:27.265 07:54:29 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme3n1 ]] 00:14:27.265 07:54:29 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:14:27.265 07:54:29 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:14:27.265 07:54:29 blockdev_xnvme -- bdev/blockdev.sh@99 -- # (( 6 > 0 )) 00:14:27.265 07:54:29 blockdev_xnvme -- bdev/blockdev.sh@100 -- # rpc_cmd 00:14:27.265 07:54:29 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.265 07:54:29 blockdev_xnvme -- bdev/blockdev.sh@100 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring' 'bdev_xnvme_create /dev/nvme2n2 nvme2n2 io_uring' 'bdev_xnvme_create /dev/nvme2n3 nvme2n3 io_uring' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring' 00:14:27.265 07:54:29 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:27.265 nvme0n1 00:14:27.265 nvme1n1 00:14:27.265 nvme2n1 00:14:27.265 nvme2n2 00:14:27.265 nvme2n3 00:14:27.265 nvme3n1 00:14:27.265 07:54:29 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.265 07:54:29 blockdev_xnvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:14:27.265 07:54:29 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.265 07:54:29 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:27.265 07:54:29 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.265 07:54:29 blockdev_xnvme -- bdev/blockdev.sh@739 -- # cat 00:14:27.265 07:54:29 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:14:27.265 07:54:29 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.265 07:54:29 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:27.265 07:54:29 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.265 07:54:29 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:14:27.265 07:54:29 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.265 07:54:29 
blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:27.265 07:54:29 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.265 07:54:29 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:14:27.265 07:54:29 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.265 07:54:29 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:27.265 07:54:29 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.265 07:54:29 blockdev_xnvme -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:14:27.265 07:54:29 blockdev_xnvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:14:27.265 07:54:29 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.265 07:54:29 blockdev_xnvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:14:27.265 07:54:29 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:27.265 07:54:29 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.265 07:54:29 blockdev_xnvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:14:27.265 07:54:29 blockdev_xnvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:14:27.266 07:54:29 blockdev_xnvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "eefec5b1-c084-4464-9a40-55c871f00f85"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "eefec5b1-c084-4464-9a40-55c871f00f85",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "7b550ce1-bcc5-40a7-983e-9fcd858f009b"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "7b550ce1-bcc5-40a7-983e-9fcd858f009b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "ca86e90b-9940-4327-908a-1553cd6c174b"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "ca86e90b-9940-4327-908a-1553cd6c174b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' 
"write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n2",' ' "aliases": [' ' "9c0c8cd1-7557-4af6-bc05-da8965539455"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "9c0c8cd1-7557-4af6-bc05-da8965539455",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n3",' ' "aliases": [' ' "92666c67-bf8d-43b5-9ec3-a47198143ec0"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "92666c67-bf8d-43b5-9ec3-a47198143ec0",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "202e1fda-ac3e-4747-94a3-b4fc87462bd0"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "202e1fda-ac3e-4747-94a3-b4fc87462bd0",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:14:27.524 07:54:29 blockdev_xnvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:14:27.524 07:54:29 blockdev_xnvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=nvme0n1 00:14:27.524 07:54:29 blockdev_xnvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:14:27.524 07:54:29 blockdev_xnvme -- bdev/blockdev.sh@753 -- # killprocess 71312 
00:14:27.524 07:54:29 blockdev_xnvme -- common/autotest_common.sh@950 -- # '[' -z 71312 ']' 00:14:27.524 07:54:29 blockdev_xnvme -- common/autotest_common.sh@954 -- # kill -0 71312 00:14:27.524 07:54:29 blockdev_xnvme -- common/autotest_common.sh@955 -- # uname 00:14:27.524 07:54:29 blockdev_xnvme -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:27.524 07:54:29 blockdev_xnvme -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71312 00:14:27.524 killing process with pid 71312 00:14:27.524 07:54:29 blockdev_xnvme -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:27.524 07:54:29 blockdev_xnvme -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:27.524 07:54:29 blockdev_xnvme -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71312' 00:14:27.524 07:54:29 blockdev_xnvme -- common/autotest_common.sh@969 -- # kill 71312 00:14:27.524 07:54:29 blockdev_xnvme -- common/autotest_common.sh@974 -- # wait 71312 00:14:30.063 07:54:31 blockdev_xnvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:14:30.063 07:54:31 blockdev_xnvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:14:30.063 07:54:31 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:14:30.063 07:54:31 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:30.063 07:54:31 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:30.063 ************************************ 00:14:30.063 START TEST bdev_hello_world 00:14:30.063 ************************************ 00:14:30.063 07:54:31 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:14:30.063 [2024-10-09 07:54:31.680143] Starting SPDK v25.01-pre git sha1 1c2942c86 / DPDK 24.03.0 initialization... 00:14:30.063 [2024-10-09 07:54:31.680293] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71688 ] 00:14:30.063 [2024-10-09 07:54:31.841412] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:30.063 [2024-10-09 07:54:32.027972] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:14:30.630 [2024-10-09 07:54:32.436541] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:14:30.630 [2024-10-09 07:54:32.436636] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:14:30.630 [2024-10-09 07:54:32.436684] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:14:30.630 [2024-10-09 07:54:32.441184] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:14:30.630 [2024-10-09 07:54:32.441604] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:14:30.630 [2024-10-09 07:54:32.441664] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:14:30.630 [2024-10-09 07:54:32.441811] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
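Stripped of the harness wrapping, the hello-world pass above is a single command: point the example binary at the JSON describing the six xnvme bdevs and name one of them (paths shortened relative to the spdk checkout):

    # what run_test bdev_hello_world executes
    ./build/examples/hello_bdev --json test/bdev/bdev.json -b nvme0n1

It opens nvme0n1, writes "Hello World!", reads the string back, and stops the app, which is exactly the NOTICE sequence traced above.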
00:14:30.630 00:14:30.630 [2024-10-09 07:54:32.441865] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:14:32.005 00:14:32.005 real 0m2.028s 00:14:32.005 user 0m1.706s 00:14:32.005 sys 0m0.204s 00:14:32.005 07:54:33 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:32.005 ************************************ 00:14:32.005 END TEST bdev_hello_world 00:14:32.005 ************************************ 00:14:32.005 07:54:33 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:14:32.005 07:54:33 blockdev_xnvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:14:32.005 07:54:33 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:32.005 07:54:33 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:32.005 07:54:33 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:32.005 ************************************ 00:14:32.005 START TEST bdev_bounds 00:14:32.005 ************************************ 00:14:32.005 07:54:33 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:14:32.005 Process bdevio pid: 71726 00:14:32.005 07:54:33 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=71726 00:14:32.005 07:54:33 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:14:32.005 07:54:33 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:14:32.005 07:54:33 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 71726' 00:14:32.005 07:54:33 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 71726 00:14:32.005 07:54:33 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@831 -- # '[' -z 71726 ']' 00:14:32.005 07:54:33 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:32.005 07:54:33 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:32.005 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:32.005 07:54:33 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:32.005 07:54:33 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:32.005 07:54:33 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:14:32.005 [2024-10-09 07:54:33.760982] Starting SPDK v25.01-pre git sha1 1c2942c86 / DPDK 24.03.0 initialization... 
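bdev_bounds drives the same six bdevs through the bdevio harness: one process exposes the bdevs, then a helper script fires the test matrix at it. Reconstructed from the run_test and echo lines above (note the 0x7 core mask, i.e. three reactors, in the EAL parameters below; -w appears to make bdevio wait for the perform_tests RPC that tests.py then issues):

    # the bdevio process, over the same bdev JSON as before
    ./test/bdev/bdevio/bdevio -w -s 0 --json test/bdev/bdev.json
    # once it is listening, the driver that runs the suites traced below
    ./test/bdev/bdevio/tests.py perform_tests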
00:14:32.005 [2024-10-09 07:54:33.761142] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71726 ]
00:14:32.005 [2024-10-09 07:54:33.941974] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3
00:14:32.282 [2024-10-09 07:54:34.173690] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1
00:14:32.282 [2024-10-09 07:54:34.173759] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2
00:14:32.282 [2024-10-09 07:54:34.173765] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0
00:14:32.857 07:54:34 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:14:32.857 07:54:34 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@864 -- # return 0
00:14:32.857 07:54:34 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests
00:14:33.114 I/O targets:
00:14:33.114 nvme0n1: 1310720 blocks of 4096 bytes (5120 MiB)
00:14:33.114 nvme1n1: 1548666 blocks of 4096 bytes (6050 MiB)
00:14:33.114 nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB)
00:14:33.114 nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB)
00:14:33.114 nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB)
00:14:33.114 nvme3n1: 262144 blocks of 4096 bytes (1024 MiB)
00:14:33.114
00:14:33.114
00:14:33.114 CUnit - A unit testing framework for C - Version 2.1-3
00:14:33.114 http://cunit.sourceforge.net/
00:14:33.114
00:14:33.114
00:14:33.114 Suite: bdevio tests on: nvme3n1
00:14:33.114 Test: blockdev write read block ...passed
00:14:33.114 Test: blockdev write zeroes read block ...passed
00:14:33.114 Test: blockdev write zeroes read no split ...passed
00:14:33.114 Test: blockdev write zeroes read split ...passed
00:14:33.114 Test: blockdev write zeroes read split partial ...passed
00:14:33.114 Test: blockdev reset ...passed
00:14:33.114 Test: blockdev write read 8 blocks ...passed
00:14:33.114 Test: blockdev write read size > 128k ...passed
00:14:33.114 Test: blockdev write read invalid size ...passed
00:14:33.114 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:14:33.114 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:14:33.114 Test: blockdev write read max offset ...passed
00:14:33.114 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:14:33.114 Test: blockdev writev readv 8 blocks ...passed
00:14:33.114 Test: blockdev writev readv 30 x 1block ...passed
00:14:33.114 Test: blockdev writev readv block ...passed
00:14:33.114 Test: blockdev writev readv size > 128k ...passed
00:14:33.114 Test: blockdev writev readv size > 128k in two iovs ...passed
00:14:33.114 Test: blockdev comparev and writev ...passed
00:14:33.114 Test: blockdev nvme passthru rw ...passed
00:14:33.114 Test: blockdev nvme passthru vendor specific ...passed
00:14:33.114 Test: blockdev nvme admin passthru ...passed
00:14:33.114 Test: blockdev copy ...passed
00:14:33.114 Suite: bdevio tests on: nvme2n3
00:14:33.114 Test: blockdev write read block ...passed
00:14:33.114 Test: blockdev write zeroes read block ...passed
00:14:33.114 Test: blockdev write zeroes read no split ...passed
00:14:33.114 Test: blockdev write zeroes read split ...passed
00:14:33.114 Test: blockdev write zeroes read split partial ...passed
00:14:33.114 Test: blockdev reset ...passed
00:14:33.114 Test: blockdev write read 8 blocks ...passed
00:14:33.114 Test: blockdev write read size > 128k ...passed
00:14:33.114 Test: blockdev write read invalid size ...passed
00:14:33.114 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:14:33.115 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:14:33.115 Test: blockdev write read max offset ...passed
00:14:33.115 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:14:33.115 Test: blockdev writev readv 8 blocks ...passed
00:14:33.115 Test: blockdev writev readv 30 x 1block ...passed
00:14:33.115 Test: blockdev writev readv block ...passed
00:14:33.115 Test: blockdev writev readv size > 128k ...passed
00:14:33.115 Test: blockdev writev readv size > 128k in two iovs ...passed
00:14:33.115 Test: blockdev comparev and writev ...passed
00:14:33.115 Test: blockdev nvme passthru rw ...passed
00:14:33.115 Test: blockdev nvme passthru vendor specific ...passed
00:14:33.115 Test: blockdev nvme admin passthru ...passed
00:14:33.115 Test: blockdev copy ...passed
00:14:33.115 Suite: bdevio tests on: nvme2n2
00:14:33.115 Test: blockdev write read block ...passed
00:14:33.115 Test: blockdev write zeroes read block ...passed
00:14:33.115 Test: blockdev write zeroes read no split ...passed
00:14:33.115 Test: blockdev write zeroes read split ...passed
00:14:33.115 Test: blockdev write zeroes read split partial ...passed
00:14:33.115 Test: blockdev reset ...passed
00:14:33.115 Test: blockdev write read 8 blocks ...passed
00:14:33.115 Test: blockdev write read size > 128k ...passed
00:14:33.115 Test: blockdev write read invalid size ...passed
00:14:33.115 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:14:33.115 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:14:33.115 Test: blockdev write read max offset ...passed
00:14:33.115 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:14:33.115 Test: blockdev writev readv 8 blocks ...passed
00:14:33.115 Test: blockdev writev readv 30 x 1block ...passed
00:14:33.115 Test: blockdev writev readv block ...passed
00:14:33.115 Test: blockdev writev readv size > 128k ...passed
00:14:33.115 Test: blockdev writev readv size > 128k in two iovs ...passed
00:14:33.115 Test: blockdev comparev and writev ...passed
00:14:33.115 Test: blockdev nvme passthru rw ...passed
00:14:33.115 Test: blockdev nvme passthru vendor specific ...passed
00:14:33.115 Test: blockdev nvme admin passthru ...passed
00:14:33.115 Test: blockdev copy ...passed
00:14:33.115 Suite: bdevio tests on: nvme2n1
00:14:33.115 Test: blockdev write read block ...passed
00:14:33.115 Test: blockdev write zeroes read block ...passed
00:14:33.115 Test: blockdev write zeroes read no split ...passed
00:14:33.373 Test: blockdev write zeroes read split ...passed
00:14:33.373 Test: blockdev write zeroes read split partial ...passed
00:14:33.373 Test: blockdev reset ...passed
00:14:33.373 Test: blockdev write read 8 blocks ...passed
00:14:33.373 Test: blockdev write read size > 128k ...passed
00:14:33.373 Test: blockdev write read invalid size ...passed
00:14:33.373 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:14:33.373 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:14:33.373 Test: blockdev write read max offset ...passed
00:14:33.373 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:14:33.373 Test: blockdev writev readv 8 blocks ...passed
00:14:33.373 Test: blockdev writev readv 30 x 1block ...passed
00:14:33.373 Test: blockdev writev readv block ...passed
00:14:33.373 Test: blockdev writev readv size > 128k ...passed
00:14:33.373 Test: blockdev writev readv size > 128k in two iovs ...passed
00:14:33.373 Test: blockdev comparev and writev ...passed
00:14:33.373 Test: blockdev nvme passthru rw ...passed
00:14:33.373 Test: blockdev nvme passthru vendor specific ...passed
00:14:33.373 Test: blockdev nvme admin passthru ...passed
00:14:33.373 Test: blockdev copy ...passed
00:14:33.373 Suite: bdevio tests on: nvme1n1
00:14:33.373 Test: blockdev write read block ...passed
00:14:33.373 Test: blockdev write zeroes read block ...passed
00:14:33.373 Test: blockdev write zeroes read no split ...passed
00:14:33.373 Test: blockdev write zeroes read split ...passed
00:14:33.373 Test: blockdev write zeroes read split partial ...passed
00:14:33.373 Test: blockdev reset ...passed
00:14:33.373 Test: blockdev write read 8 blocks ...passed
00:14:33.373 Test: blockdev write read size > 128k ...passed
00:14:33.373 Test: blockdev write read invalid size ...passed
00:14:33.373 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:14:33.373 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:14:33.373 Test: blockdev write read max offset ...passed
00:14:33.373 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:14:33.373 Test: blockdev writev readv 8 blocks ...passed
00:14:33.373 Test: blockdev writev readv 30 x 1block ...passed
00:14:33.373 Test: blockdev writev readv block ...passed
00:14:33.373 Test: blockdev writev readv size > 128k ...passed
00:14:33.373 Test: blockdev writev readv size > 128k in two iovs ...passed
00:14:33.373 Test: blockdev comparev and writev ...passed
00:14:33.373 Test: blockdev nvme passthru rw ...passed
00:14:33.373 Test: blockdev nvme passthru vendor specific ...passed
00:14:33.373 Test: blockdev nvme admin passthru ...passed
00:14:33.373 Test: blockdev copy ...passed
00:14:33.373 Suite: bdevio tests on: nvme0n1
00:14:33.373 Test: blockdev write read block ...passed
00:14:33.373 Test: blockdev write zeroes read block ...passed
00:14:33.373 Test: blockdev write zeroes read no split ...passed
00:14:33.373 Test: blockdev write zeroes read split ...passed
00:14:33.373 Test: blockdev write zeroes read split partial ...passed
00:14:33.373 Test: blockdev reset ...passed
00:14:33.373 Test: blockdev write read 8 blocks ...passed
00:14:33.373 Test: blockdev write read size > 128k ...passed
00:14:33.373 Test: blockdev write read invalid size ...passed
00:14:33.373 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:14:33.373 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:14:33.373 Test: blockdev write read max offset ...passed
00:14:33.373 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:14:33.373 Test: blockdev writev readv 8 blocks ...passed
00:14:33.373 Test: blockdev writev readv 30 x 1block ...passed
00:14:33.373 Test: blockdev writev readv block ...passed
00:14:33.373 Test: blockdev writev readv size > 128k ...passed
00:14:33.373 Test: blockdev writev readv size > 128k in two iovs ...passed
00:14:33.373 Test: blockdev comparev and writev ...passed
00:14:33.373 Test: blockdev nvme passthru rw ...passed
00:14:33.373 Test: blockdev nvme passthru vendor specific ...passed
00:14:33.373 Test: blockdev nvme admin passthru ...passed
00:14:33.373 Test: blockdev copy ...passed
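All six suites above were driven from outside the app: bdev_bounds launches bdevio with -w (start up and wait for RPCs), blocks until the UNIX domain socket /var/tmp/spdk.sock accepts connections, then has tests.py trigger perform_tests over that socket. A rough bash sketch of the launch-and-drive pattern, using this job's paths; the polling loop stands in for autotest_common.sh's waitforlisten helper, and the 0.1 s interval is an assumption, not taken from this log:

    # Sketch of the launch-and-drive pattern (illustrative, not verbatim).
    spdk=/home/vagrant/spdk_repo/spdk

    "$spdk/test/bdev/bdevio/bdevio" -w -s 0 --json "$spdk/test/bdev/bdev.json" &
    bdevio_pid=$!
    for ((i = 1; i <= 100; i++)); do
        # rpc_get_methods succeeds once the app accepts RPCs on /var/tmp/spdk.sock
        "$spdk/scripts/rpc.py" rpc_get_methods &>/dev/null && break
        sleep 0.1
    done
    "$spdk/test/bdev/bdevio/tests.py" perform_tests    # runs the CUnit suites
    kill "$bdevio_pid" && wait "$bdevio_pid"

The CUnit run summary for those suites follows.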
00:14:33.373
00:14:33.373 Run Summary: Type Total Ran Passed Failed Inactive
00:14:33.373 suites 6 6 n/a 0 0
00:14:33.373 tests 138 138 138 0 0
00:14:33.373 asserts 780 780 780 0 n/a
00:14:33.373
00:14:33.373 Elapsed time = 1.183 seconds
00:14:33.373 0
00:14:33.373 07:54:35 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 71726
00:14:33.373 07:54:35 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 71726 ']'
00:14:33.373 07:54:35 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 71726
00:14:33.373 07:54:35 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@955 -- # uname
00:14:33.373 07:54:35 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:14:33.373 07:54:35 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71726
00:14:33.373 07:54:35 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:14:33.373 07:54:35 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:14:33.373 killing process with pid 71726 07:54:35 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71726'
00:14:33.373 07:54:35 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@969 -- # kill 71726
00:14:33.373 07:54:35 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@974 -- # wait 71726
00:14:34.748 07:54:36 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT
00:14:34.748
00:14:34.748 real 0m2.857s
00:14:34.748 user 0m6.805s
00:14:34.748 sys 0m0.401s
00:14:34.748 07:54:36 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable
00:14:34.748 ************************************
00:14:34.748 END TEST bdev_bounds
00:14:34.748 ************************************
00:14:34.748 07:54:36 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x
00:14:34.748 07:54:36 blockdev_xnvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' ''
00:14:34.748 07:54:36 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']'
00:14:34.748 07:54:36 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable
00:14:34.748 07:54:36 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x
00:14:34.748 ************************************
00:14:34.748 START TEST bdev_nbd
00:14:34.748 ************************************
00:14:34.748 07:54:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' ''
00:14:34.748 07:54:36 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s
00:14:34.748 07:54:36 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]]
00:14:34.748 07:54:36 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:14:34.748 07:54:36 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
00:14:34.748 07:54:36 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1')
00:14:34.748 07:54:36 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all
00:14:34.748 07:54:36 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6
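The bdev_nbd test starting here exports each bdev through the kernel's nbd driver and exercises the resulting /dev/nbdX block devices with raw I/O. The per-device readiness checks repeated through the rest of this job (grep of /proc/partitions, then a single-block O_DIRECT dd) condense to the sketch below; the paths are this job's, but the helper is illustrative rather than the verbatim nbd_common.sh code, and the sleep interval is an assumption (the trace only shows the bounded retry loop and the grep):

    # Illustrative condensation of the nbd_start_disk / waitfornbd flow.
    # Assumes bdev_svc is already listening on /var/tmp/spdk-nbd.sock.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    "$rpc" -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0
    for ((i = 1; i <= 20; i++)); do
        grep -q -w nbd0 /proc/partitions && break   # kernel registered the device
        sleep 0.1                                   # interval assumed, not in the log
    done
    # Prove the device actually serves I/O: read one 4 KiB block with O_DIRECT.
    dd if=/dev/nbd0 of=/tmp/nbdtest bs=4096 count=1 iflag=direct
    [[ "$(stat -c %s /tmp/nbdtest)" != 0 ]]         # the read returned real data
    rm -f /tmp/nbdtest

The single-block O_DIRECT read is the cheapest way to show the NBD connection round-trips data to the SPDK app rather than merely existing as a device node.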
00:14:34.748 07:54:36 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]]
00:14:34.748 07:54:36 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9')
00:14:34.748 07:54:36 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all
00:14:34.748 07:54:36 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6
00:14:34.748 07:54:36 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13')
00:14:34.748 07:54:36 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list
00:14:34.748 07:54:36 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1')
00:14:34.748 07:54:36 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list
00:14:34.748 07:54:36 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=71791
00:14:34.748 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 07:54:36 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT
00:14:34.748 07:54:36 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json ''
00:14:34.748 07:54:36 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 71791 /var/tmp/spdk-nbd.sock
00:14:34.748 07:54:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 71791 ']'
00:14:34.748 07:54:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:14:34.748 07:54:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100
00:14:34.748 07:54:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:14:34.748 07:54:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable
00:14:34.748 07:54:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x
00:14:34.748 [2024-10-09 07:54:36.708230] Starting SPDK v25.01-pre git sha1 1c2942c86 / DPDK 24.03.0 initialization...
00:14:34.748 [2024-10-09 07:54:36.708758] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:35.007 [2024-10-09 07:54:36.887499] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:35.265 [2024-10-09 07:54:37.104382] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:14:35.847 07:54:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:35.847 07:54:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # return 0 00:14:35.847 07:54:37 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' 00:14:35.847 07:54:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:35.847 07:54:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:14:35.847 07:54:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:14:35.847 07:54:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' 00:14:35.847 07:54:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:35.847 07:54:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:14:35.847 07:54:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:14:35.847 07:54:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:14:35.847 07:54:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:14:35.847 07:54:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:14:35.847 07:54:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:14:35.847 07:54:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:14:36.414 07:54:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:14:36.414 07:54:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:14:36.414 07:54:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:14:36.414 07:54:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:14:36.414 07:54:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:14:36.414 07:54:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:36.414 07:54:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:36.414 07:54:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:14:36.414 07:54:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:14:36.414 07:54:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:36.414 07:54:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:36.414 07:54:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:36.414 
1+0 records in 00:14:36.414 1+0 records out 00:14:36.414 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0006524 s, 6.3 MB/s 00:14:36.414 07:54:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:36.414 07:54:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:14:36.414 07:54:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:36.414 07:54:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:36.414 07:54:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:14:36.414 07:54:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:14:36.414 07:54:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:14:36.414 07:54:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:14:36.414 07:54:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:14:36.414 07:54:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:14:36.414 07:54:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:14:36.414 07:54:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:14:36.414 07:54:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:14:36.414 07:54:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:36.414 07:54:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:36.414 07:54:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:14:36.722 07:54:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:14:36.722 07:54:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:36.722 07:54:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:36.722 07:54:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:36.722 1+0 records in 00:14:36.722 1+0 records out 00:14:36.722 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000804774 s, 5.1 MB/s 00:14:36.722 07:54:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:36.722 07:54:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:14:36.722 07:54:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:36.722 07:54:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:36.722 07:54:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:14:36.722 07:54:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:14:36.722 07:54:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:14:36.722 07:54:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:14:36.981 07:54:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:14:36.981 07:54:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:14:36.981 07:54:38 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:14:36.981 07:54:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd2 00:14:36.981 07:54:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:14:36.981 07:54:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:36.981 07:54:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:36.981 07:54:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd2 /proc/partitions 00:14:36.981 07:54:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:14:36.981 07:54:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:36.981 07:54:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:36.981 07:54:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:36.981 1+0 records in 00:14:36.981 1+0 records out 00:14:36.981 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000686017 s, 6.0 MB/s 00:14:36.981 07:54:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:36.981 07:54:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:14:36.981 07:54:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:36.981 07:54:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:36.981 07:54:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:14:36.981 07:54:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:14:36.981 07:54:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:14:36.981 07:54:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n2 00:14:37.240 07:54:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:14:37.240 07:54:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:14:37.240 07:54:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:14:37.240 07:54:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd3 00:14:37.240 07:54:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:14:37.240 07:54:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:37.240 07:54:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:37.240 07:54:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd3 /proc/partitions 00:14:37.240 07:54:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:14:37.240 07:54:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:37.240 07:54:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:37.240 07:54:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:37.240 1+0 records in 00:14:37.240 1+0 records out 00:14:37.240 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00053019 s, 7.7 MB/s 00:14:37.240 07:54:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # 
stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:37.240 07:54:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:14:37.240 07:54:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:37.240 07:54:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:37.240 07:54:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:14:37.240 07:54:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:14:37.240 07:54:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:14:37.240 07:54:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n3 00:14:37.498 07:54:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:14:37.499 07:54:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:14:37.499 07:54:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:14:37.499 07:54:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd4 00:14:37.499 07:54:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:14:37.499 07:54:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:37.499 07:54:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:37.499 07:54:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd4 /proc/partitions 00:14:37.499 07:54:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:14:37.499 07:54:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:37.499 07:54:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:37.499 07:54:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:37.499 1+0 records in 00:14:37.499 1+0 records out 00:14:37.499 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000526912 s, 7.8 MB/s 00:14:37.499 07:54:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:37.499 07:54:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:14:37.499 07:54:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:37.499 07:54:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:37.499 07:54:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:14:37.499 07:54:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:14:37.499 07:54:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:14:37.499 07:54:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:14:37.757 07:54:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:14:37.757 07:54:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:14:37.757 07:54:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:14:37.757 07:54:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd5 00:14:37.757 07:54:39 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:14:37.757 07:54:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:37.757 07:54:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:37.757 07:54:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd5 /proc/partitions 00:14:37.757 07:54:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:14:37.757 07:54:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:37.757 07:54:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:37.757 07:54:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:37.757 1+0 records in 00:14:37.757 1+0 records out 00:14:37.757 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000613171 s, 6.7 MB/s 00:14:37.757 07:54:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:37.757 07:54:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:14:37.757 07:54:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:37.757 07:54:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:37.757 07:54:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:14:37.757 07:54:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:14:37.757 07:54:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:14:37.757 07:54:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:14:38.324 07:54:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:14:38.324 { 00:14:38.324 "nbd_device": "/dev/nbd0", 00:14:38.324 "bdev_name": "nvme0n1" 00:14:38.324 }, 00:14:38.324 { 00:14:38.324 "nbd_device": "/dev/nbd1", 00:14:38.324 "bdev_name": "nvme1n1" 00:14:38.324 }, 00:14:38.324 { 00:14:38.324 "nbd_device": "/dev/nbd2", 00:14:38.324 "bdev_name": "nvme2n1" 00:14:38.324 }, 00:14:38.324 { 00:14:38.324 "nbd_device": "/dev/nbd3", 00:14:38.324 "bdev_name": "nvme2n2" 00:14:38.324 }, 00:14:38.324 { 00:14:38.324 "nbd_device": "/dev/nbd4", 00:14:38.324 "bdev_name": "nvme2n3" 00:14:38.324 }, 00:14:38.324 { 00:14:38.324 "nbd_device": "/dev/nbd5", 00:14:38.324 "bdev_name": "nvme3n1" 00:14:38.324 } 00:14:38.324 ]' 00:14:38.324 07:54:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:14:38.324 07:54:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:14:38.324 07:54:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:14:38.324 { 00:14:38.324 "nbd_device": "/dev/nbd0", 00:14:38.324 "bdev_name": "nvme0n1" 00:14:38.324 }, 00:14:38.324 { 00:14:38.324 "nbd_device": "/dev/nbd1", 00:14:38.324 "bdev_name": "nvme1n1" 00:14:38.324 }, 00:14:38.324 { 00:14:38.324 "nbd_device": "/dev/nbd2", 00:14:38.324 "bdev_name": "nvme2n1" 00:14:38.324 }, 00:14:38.324 { 00:14:38.324 "nbd_device": "/dev/nbd3", 00:14:38.324 "bdev_name": "nvme2n2" 00:14:38.324 }, 00:14:38.324 { 00:14:38.324 "nbd_device": "/dev/nbd4", 00:14:38.324 "bdev_name": "nvme2n3" 00:14:38.324 }, 00:14:38.324 { 00:14:38.324 "nbd_device": 
"/dev/nbd5", 00:14:38.324 "bdev_name": "nvme3n1" 00:14:38.324 } 00:14:38.324 ]' 00:14:38.324 07:54:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:14:38.324 07:54:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:38.324 07:54:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:14:38.324 07:54:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:38.324 07:54:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:14:38.324 07:54:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:38.324 07:54:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:14:38.324 07:54:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:38.324 07:54:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:38.583 07:54:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:38.583 07:54:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:38.583 07:54:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:38.583 07:54:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:38.583 07:54:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:38.583 07:54:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:38.583 07:54:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:38.583 07:54:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:14:38.842 07:54:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:38.842 07:54:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:38.842 07:54:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:38.842 07:54:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:38.842 07:54:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:38.842 07:54:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:38.842 07:54:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:38.842 07:54:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:38.842 07:54:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:38.842 07:54:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:14:39.101 07:54:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:14:39.101 07:54:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:14:39.101 07:54:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:14:39.101 07:54:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:39.101 07:54:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:39.101 07:54:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep 
-q -w nbd2 /proc/partitions 00:14:39.101 07:54:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:39.101 07:54:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:39.101 07:54:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:39.101 07:54:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:14:39.359 07:54:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:14:39.359 07:54:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:14:39.359 07:54:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:14:39.359 07:54:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:39.359 07:54:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:39.359 07:54:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:14:39.359 07:54:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:39.359 07:54:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:39.359 07:54:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:39.359 07:54:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:14:39.617 07:54:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:14:39.617 07:54:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:14:39.617 07:54:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:14:39.617 07:54:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:39.617 07:54:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:39.617 07:54:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:14:39.617 07:54:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:39.617 07:54:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:39.617 07:54:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:39.617 07:54:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:14:39.875 07:54:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:14:39.875 07:54:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:14:39.875 07:54:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:14:39.875 07:54:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:39.875 07:54:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:39.875 07:54:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:14:39.875 07:54:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:39.875 07:54:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:39.875 07:54:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:14:39.875 07:54:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:39.875 07:54:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:14:40.134 07:54:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:14:40.134 07:54:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:14:40.134 07:54:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:14:40.134 07:54:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:14:40.134 07:54:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:14:40.134 07:54:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:14:40.134 07:54:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:14:40.134 07:54:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:14:40.134 07:54:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:14:40.134 07:54:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:14:40.134 07:54:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:14:40.134 07:54:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:14:40.134 07:54:42 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:14:40.134 07:54:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:40.134 07:54:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:14:40.134 07:54:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:14:40.134 07:54:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:14:40.134 07:54:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:14:40.134 07:54:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:14:40.134 07:54:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:40.134 07:54:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:14:40.134 07:54:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:40.134 07:54:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:14:40.134 07:54:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:40.134 07:54:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:14:40.134 07:54:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:40.134 07:54:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:14:40.134 07:54:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:14:40.699 /dev/nbd0 00:14:40.699 07:54:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:40.699 07:54:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:40.699 07:54:42 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:14:40.699 07:54:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:14:40.699 07:54:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:40.699 07:54:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:40.699 07:54:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:14:40.699 07:54:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:14:40.699 07:54:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:40.699 07:54:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:40.699 07:54:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:40.699 1+0 records in 00:14:40.699 1+0 records out 00:14:40.699 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000577551 s, 7.1 MB/s 00:14:40.699 07:54:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:40.699 07:54:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:14:40.699 07:54:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:40.699 07:54:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:40.699 07:54:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:14:40.699 07:54:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:40.699 07:54:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:14:40.699 07:54:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd1 00:14:40.956 /dev/nbd1 00:14:40.956 07:54:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:40.956 07:54:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:40.956 07:54:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:14:40.956 07:54:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:14:40.956 07:54:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:40.956 07:54:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:40.956 07:54:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:14:40.956 07:54:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:14:40.956 07:54:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:40.956 07:54:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:40.956 07:54:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:40.956 1+0 records in 00:14:40.956 1+0 records out 00:14:40.956 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000476537 s, 8.6 MB/s 00:14:40.956 07:54:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:40.956 07:54:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:14:40.956 07:54:42 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:40.956 07:54:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:40.956 07:54:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:14:40.956 07:54:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:40.956 07:54:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:14:40.957 07:54:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd10 00:14:41.214 /dev/nbd10 00:14:41.214 07:54:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:14:41.214 07:54:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:14:41.214 07:54:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd10 00:14:41.214 07:54:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:14:41.214 07:54:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:41.214 07:54:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:41.214 07:54:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd10 /proc/partitions 00:14:41.214 07:54:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:14:41.214 07:54:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:41.214 07:54:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:41.214 07:54:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:41.214 1+0 records in 00:14:41.214 1+0 records out 00:14:41.214 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000476331 s, 8.6 MB/s 00:14:41.214 07:54:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:41.214 07:54:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:14:41.214 07:54:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:41.214 07:54:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:41.214 07:54:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:14:41.214 07:54:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:41.214 07:54:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:14:41.215 07:54:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n2 /dev/nbd11 00:14:41.482 /dev/nbd11 00:14:41.739 07:54:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:14:41.739 07:54:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:14:41.739 07:54:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd11 00:14:41.739 07:54:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:14:41.739 07:54:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:41.739 07:54:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:41.739 07:54:43 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd11 /proc/partitions 00:14:41.739 07:54:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:14:41.739 07:54:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:41.739 07:54:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:41.739 07:54:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:41.739 1+0 records in 00:14:41.739 1+0 records out 00:14:41.739 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000571554 s, 7.2 MB/s 00:14:41.739 07:54:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:41.739 07:54:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:14:41.739 07:54:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:41.739 07:54:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:41.739 07:54:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:14:41.739 07:54:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:41.739 07:54:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:14:41.739 07:54:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n3 /dev/nbd12 00:14:42.039 /dev/nbd12 00:14:42.039 07:54:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:14:42.039 07:54:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:14:42.039 07:54:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd12 00:14:42.039 07:54:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:14:42.039 07:54:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:42.039 07:54:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:42.039 07:54:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd12 /proc/partitions 00:14:42.039 07:54:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:14:42.039 07:54:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:42.039 07:54:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:42.039 07:54:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:42.039 1+0 records in 00:14:42.039 1+0 records out 00:14:42.039 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00100653 s, 4.1 MB/s 00:14:42.039 07:54:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:42.039 07:54:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:14:42.039 07:54:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:42.039 07:54:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:42.039 07:54:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:14:42.039 07:54:43 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:42.039 07:54:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:14:42.039 07:54:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:14:42.297 /dev/nbd13 00:14:42.297 07:54:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:14:42.297 07:54:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:14:42.297 07:54:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd13 00:14:42.297 07:54:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:14:42.297 07:54:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:42.297 07:54:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:42.297 07:54:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd13 /proc/partitions 00:14:42.297 07:54:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:14:42.297 07:54:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:42.297 07:54:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:42.297 07:54:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:42.297 1+0 records in 00:14:42.297 1+0 records out 00:14:42.297 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000932409 s, 4.4 MB/s 00:14:42.297 07:54:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:42.297 07:54:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:14:42.297 07:54:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:42.297 07:54:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:42.297 07:54:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:14:42.297 07:54:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:42.297 07:54:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:14:42.297 07:54:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:14:42.297 07:54:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:42.297 07:54:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:14:42.555 07:54:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:14:42.555 { 00:14:42.555 "nbd_device": "/dev/nbd0", 00:14:42.555 "bdev_name": "nvme0n1" 00:14:42.555 }, 00:14:42.555 { 00:14:42.555 "nbd_device": "/dev/nbd1", 00:14:42.555 "bdev_name": "nvme1n1" 00:14:42.555 }, 00:14:42.555 { 00:14:42.555 "nbd_device": "/dev/nbd10", 00:14:42.555 "bdev_name": "nvme2n1" 00:14:42.555 }, 00:14:42.555 { 00:14:42.555 "nbd_device": "/dev/nbd11", 00:14:42.555 "bdev_name": "nvme2n2" 00:14:42.555 }, 00:14:42.555 { 00:14:42.555 "nbd_device": "/dev/nbd12", 00:14:42.555 "bdev_name": "nvme2n3" 00:14:42.555 }, 00:14:42.555 { 00:14:42.555 "nbd_device": "/dev/nbd13", 00:14:42.555 "bdev_name": "nvme3n1" 00:14:42.555 } 00:14:42.555 ]' 00:14:42.555 07:54:44 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:14:42.555 { 00:14:42.555 "nbd_device": "/dev/nbd0", 00:14:42.555 "bdev_name": "nvme0n1" 00:14:42.555 }, 00:14:42.555 { 00:14:42.555 "nbd_device": "/dev/nbd1", 00:14:42.555 "bdev_name": "nvme1n1" 00:14:42.555 }, 00:14:42.555 { 00:14:42.555 "nbd_device": "/dev/nbd10", 00:14:42.555 "bdev_name": "nvme2n1" 00:14:42.555 }, 00:14:42.555 { 00:14:42.555 "nbd_device": "/dev/nbd11", 00:14:42.555 "bdev_name": "nvme2n2" 00:14:42.555 }, 00:14:42.555 { 00:14:42.555 "nbd_device": "/dev/nbd12", 00:14:42.555 "bdev_name": "nvme2n3" 00:14:42.555 }, 00:14:42.555 { 00:14:42.555 "nbd_device": "/dev/nbd13", 00:14:42.555 "bdev_name": "nvme3n1" 00:14:42.555 } 00:14:42.555 ]' 00:14:42.555 07:54:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:14:42.555 07:54:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:14:42.555 /dev/nbd1 00:14:42.555 /dev/nbd10 00:14:42.555 /dev/nbd11 00:14:42.555 /dev/nbd12 00:14:42.555 /dev/nbd13' 00:14:42.555 07:54:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:14:42.555 /dev/nbd1 00:14:42.555 /dev/nbd10 00:14:42.555 /dev/nbd11 00:14:42.555 /dev/nbd12 00:14:42.555 /dev/nbd13' 00:14:42.555 07:54:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:14:42.555 07:54:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:14:42.555 07:54:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:14:42.555 07:54:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:14:42.555 07:54:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:14:42.555 07:54:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:14:42.555 07:54:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:14:42.555 07:54:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:14:42.555 07:54:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:14:42.555 07:54:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:14:42.555 07:54:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:14:42.555 07:54:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:14:42.555 256+0 records in 00:14:42.555 256+0 records out 00:14:42.555 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00634807 s, 165 MB/s 00:14:42.556 07:54:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:42.556 07:54:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:14:42.814 256+0 records in 00:14:42.814 256+0 records out 00:14:42.814 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.121557 s, 8.6 MB/s 00:14:42.814 07:54:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:42.814 07:54:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:14:42.814 256+0 records in 00:14:42.814 256+0 records out 00:14:42.814 1048576 bytes (1.0 MB, 1.0 
MiB) copied, 0.121344 s, 8.6 MB/s 00:14:42.814 07:54:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:42.814 07:54:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:14:43.071 256+0 records in 00:14:43.071 256+0 records out 00:14:43.071 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.121583 s, 8.6 MB/s 00:14:43.072 07:54:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:43.072 07:54:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:14:43.072 256+0 records in 00:14:43.072 256+0 records out 00:14:43.072 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.110991 s, 9.4 MB/s 00:14:43.072 07:54:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:43.072 07:54:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:14:43.329 256+0 records in 00:14:43.329 256+0 records out 00:14:43.329 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.125298 s, 8.4 MB/s 00:14:43.329 07:54:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:43.329 07:54:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:14:43.329 256+0 records in 00:14:43.329 256+0 records out 00:14:43.329 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.125959 s, 8.3 MB/s 00:14:43.330 07:54:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:14:43.330 07:54:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:14:43.330 07:54:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:14:43.330 07:54:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:14:43.330 07:54:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:14:43.330 07:54:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:14:43.330 07:54:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:14:43.330 07:54:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:43.330 07:54:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:14:43.330 07:54:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:43.330 07:54:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:14:43.330 07:54:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:43.330 07:54:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:14:43.330 07:54:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:43.330 07:54:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:14:43.330 07:54:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:43.330 07:54:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:14:43.330 07:54:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:43.330 07:54:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:14:43.330 07:54:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:14:43.330 07:54:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:14:43.330 07:54:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:43.330 07:54:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:14:43.330 07:54:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:43.330 07:54:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:14:43.330 07:54:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:43.330 07:54:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:14:43.895 07:54:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:43.895 07:54:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:43.895 07:54:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:43.895 07:54:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:43.895 07:54:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:43.895 07:54:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:43.895 07:54:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:43.895 07:54:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:43.895 07:54:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:43.895 07:54:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:14:44.152 07:54:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:44.152 07:54:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:44.152 07:54:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:44.152 07:54:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:44.152 07:54:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:44.152 07:54:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:44.152 07:54:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:44.152 07:54:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:44.152 07:54:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:44.152 07:54:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:14:44.409 07:54:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:14:44.409 07:54:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:14:44.409 07:54:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:14:44.409 07:54:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:44.409 07:54:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:44.409 07:54:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:14:44.409 07:54:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:44.409 07:54:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:44.409 07:54:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:44.409 07:54:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:14:44.667 07:54:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:14:44.667 07:54:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:14:44.667 07:54:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:14:44.667 07:54:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:44.667 07:54:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:44.667 07:54:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:14:44.667 07:54:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:44.667 07:54:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:44.667 07:54:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:44.667 07:54:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:14:44.925 07:54:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:14:44.926 07:54:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:14:44.926 07:54:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:14:44.926 07:54:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:44.926 07:54:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:44.926 07:54:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:14:44.926 07:54:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:44.926 07:54:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:44.926 07:54:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:44.926 07:54:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:14:45.198 07:54:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:14:45.198 07:54:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:14:45.198 07:54:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:14:45.198 07:54:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:45.198 07:54:47 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:45.198 07:54:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:14:45.198 07:54:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:45.198 07:54:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:45.198 07:54:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:14:45.198 07:54:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:45.198 07:54:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:14:45.763 07:54:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:14:45.763 07:54:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:14:45.763 07:54:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:14:45.763 07:54:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:14:45.763 07:54:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:14:45.763 07:54:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:14:45.763 07:54:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:14:45.763 07:54:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:14:45.763 07:54:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:14:45.763 07:54:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:14:45.763 07:54:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:14:45.763 07:54:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:14:45.763 07:54:47 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:14:45.763 07:54:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:45.763 07:54:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:14:45.763 07:54:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:14:46.020 malloc_lvol_verify 00:14:46.020 07:54:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:14:46.278 7475ceb7-7b42-4d85-8ec4-ae14b7eabf0e 00:14:46.278 07:54:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:14:46.536 71e92ea6-c253-4e1d-9c1a-588bc37233d4 00:14:46.536 07:54:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:14:46.795 /dev/nbd0 00:14:46.795 07:54:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:14:46.795 07:54:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:14:46.795 07:54:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:14:46.795 07:54:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:14:46.795 07:54:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 
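(Aside: the nbd_with_lvol_verify flow exercised above condenses to the following RPC sequence — a sketch assembled only from commands already shown in this log, assuming a running SPDK target on /var/tmp/spdk-nbd.sock and a free /dev/nbd0.)

    # Back an lvstore with a 16 MiB, 512 B-block malloc bdev
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs
    # Carve a 4 MiB logical volume out of that store
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs
    # Export it as a kernel NBD device, prove it is writable by formatting it, then tear down
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0
    mkfs.ext4 /dev/nbd0
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0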
00:14:46.795 mke2fs 1.47.0 (5-Feb-2023) 00:14:46.795 Discarding device blocks: 0/4096 done 00:14:46.795 Creating filesystem with 4096 1k blocks and 1024 inodes 00:14:46.795 00:14:46.795 Allocating group tables: 0/1 done 00:14:46.795 Writing inode tables: 0/1 done 00:14:46.795 Creating journal (1024 blocks): done 00:14:46.795 Writing superblocks and filesystem accounting information: 0/1 done 00:14:46.795 00:14:46.795 07:54:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:14:46.795 07:54:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:46.795 07:54:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:46.795 07:54:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:46.795 07:54:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:14:46.795 07:54:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:46.795 07:54:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:14:47.052 07:54:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:47.052 07:54:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:47.052 07:54:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:47.052 07:54:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:47.052 07:54:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:47.052 07:54:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:47.052 07:54:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:47.052 07:54:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:47.052 07:54:49 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 71791 00:14:47.052 07:54:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 71791 ']' 00:14:47.052 07:54:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 71791 00:14:47.052 07:54:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@955 -- # uname 00:14:47.052 07:54:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:47.052 07:54:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71791 00:14:47.052 killing process with pid 71791 00:14:47.052 07:54:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:47.052 07:54:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:47.052 07:54:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71791' 00:14:47.052 07:54:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@969 -- # kill 71791 00:14:47.052 07:54:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@974 -- # wait 71791 00:14:48.428 ************************************ 00:14:48.428 END TEST bdev_nbd 00:14:48.428 ************************************ 00:14:48.428 07:54:50 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:14:48.428 00:14:48.428 real 0m13.799s 00:14:48.428 user 0m20.005s 00:14:48.428 sys 0m4.234s 00:14:48.428 07:54:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:48.428 
07:54:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:14:48.428 07:54:50 blockdev_xnvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:14:48.428 07:54:50 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = nvme ']' 00:14:48.428 07:54:50 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = gpt ']' 00:14:48.428 07:54:50 blockdev_xnvme -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:14:48.428 07:54:50 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:48.428 07:54:50 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:48.428 07:54:50 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:48.428 ************************************ 00:14:48.428 START TEST bdev_fio 00:14:48.428 ************************************ 00:14:48.428 07:54:50 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1125 -- # fio_test_suite '' 00:14:48.428 07:54:50 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:14:48.428 07:54:50 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:14:48.428 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:14:48.428 07:54:50 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:14:48.428 07:54:50 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:14:48.428 07:54:50 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:14:48.428 07:54:50 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:14:48.428 07:54:50 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:14:48.428 07:54:50 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:14:48.428 07:54:50 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=verify 00:14:48.428 07:54:50 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type=AIO 00:14:48.428 07:54:50 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:14:48.428 07:54:50 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:14:48.428 07:54:50 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:14:48.428 07:54:50 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z verify ']' 00:14:48.428 07:54:50 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:14:48.428 07:54:50 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:14:48.428 07:54:50 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:14:48.428 07:54:50 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1313 -- # '[' verify == verify ']' 00:14:48.428 07:54:50 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1314 -- # cat 00:14:48.428 07:54:50 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1323 -- # '[' AIO == AIO ']' 00:14:48.428 07:54:50 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1324 -- # /usr/src/fio/fio --version 00:14:48.687 07:54:50 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1324 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:14:48.687 07:54:50 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1325 -- # echo 
serialize_overlap=1 00:14:48.687 07:54:50 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:14:48.687 07:54:50 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n1]' 00:14:48.688 07:54:50 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n1 00:14:48.688 07:54:50 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:14:48.688 07:54:50 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme1n1]' 00:14:48.688 07:54:50 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme1n1 00:14:48.688 07:54:50 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:14:48.688 07:54:50 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n1]' 00:14:48.688 07:54:50 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n1 00:14:48.688 07:54:50 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:14:48.688 07:54:50 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n2]' 00:14:48.688 07:54:50 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n2 00:14:48.688 07:54:50 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:14:48.688 07:54:50 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n3]' 00:14:48.688 07:54:50 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n3 00:14:48.688 07:54:50 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:14:48.688 07:54:50 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme3n1]' 00:14:48.688 07:54:50 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme3n1 00:14:48.688 07:54:50 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:14:48.688 07:54:50 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:14:48.688 07:54:50 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1101 -- # '[' 11 -le 1 ']' 00:14:48.688 07:54:50 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:48.688 07:54:50 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:14:48.688 ************************************ 00:14:48.688 START TEST bdev_fio_rw_verify 00:14:48.688 ************************************ 00:14:48.688 07:54:50 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1125 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:14:48.688 07:54:50 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 
--spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:14:48.688 07:54:50 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:14:48.688 07:54:50 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:48.688 07:54:50 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local sanitizers 00:14:48.688 07:54:50 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:48.688 07:54:50 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # shift 00:14:48.688 07:54:50 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local asan_lib= 00:14:48.688 07:54:50 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:14:48.688 07:54:50 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libasan 00:14:48.688 07:54:50 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:48.688 07:54:50 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:14:48.688 07:54:50 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:48.688 07:54:50 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:48.688 07:54:50 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # break 00:14:48.688 07:54:50 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:48.688 07:54:50 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:14:48.946 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:48.946 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:48.946 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:48.946 job_nvme2n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:48.946 job_nvme2n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:48.946 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:48.946 fio-3.35 00:14:48.946 Starting 6 threads 00:15:01.199 00:15:01.199 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=72224: Wed Oct 9 07:55:01 2024 00:15:01.199 read: IOPS=25.6k, BW=99.9MiB/s (105MB/s)(999MiB/10001msec) 00:15:01.199 slat (usec): min=3, max=1372, avg= 7.95, stdev= 5.79 00:15:01.199 clat (usec): min=128, max=8589, avg=728.71, 
stdev=335.28 00:15:01.199 lat (usec): min=139, max=8605, avg=736.66, stdev=336.10 00:15:01.199 clat percentiles (usec): 00:15:01.199 | 50.000th=[ 742], 99.000th=[ 1401], 99.900th=[ 4359], 99.990th=[ 7898], 00:15:01.199 | 99.999th=[ 8586] 00:15:01.199 write: IOPS=25.9k, BW=101MiB/s (106MB/s)(1011MiB/10001msec); 0 zone resets 00:15:01.199 slat (usec): min=14, max=3754, avg=30.22, stdev=33.70 00:15:01.199 clat (usec): min=87, max=9216, avg=810.90, stdev=360.81 00:15:01.199 lat (usec): min=106, max=9501, avg=841.11, stdev=363.69 00:15:01.199 clat percentiles (usec): 00:15:01.199 | 50.000th=[ 816], 99.000th=[ 1565], 99.900th=[ 4817], 99.990th=[ 8717], 00:15:01.199 | 99.999th=[ 9241] 00:15:01.199 bw ( KiB/s): min=88192, max=128855, per=100.00%, avg=103761.11, stdev=1703.81, samples=114 00:15:01.199 iops : min=22048, max=32213, avg=25940.11, stdev=425.96, samples=114 00:15:01.199 lat (usec) : 100=0.01%, 250=2.01%, 500=16.09%, 750=28.09%, 1000=37.74% 00:15:01.199 lat (msec) : 2=15.52%, 4=0.35%, 10=0.19% 00:15:01.199 cpu : usr=60.59%, sys=26.29%, ctx=6904, majf=0, minf=22322 00:15:01.199 IO depths : 1=12.2%, 2=24.8%, 4=50.2%, 8=12.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:01.199 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:01.199 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:01.199 issued rwts: total=255780,258708,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:01.199 latency : target=0, window=0, percentile=100.00%, depth=8 00:15:01.199 00:15:01.199 Run status group 0 (all jobs): 00:15:01.200 READ: bw=99.9MiB/s (105MB/s), 99.9MiB/s-99.9MiB/s (105MB/s-105MB/s), io=999MiB (1048MB), run=10001-10001msec 00:15:01.200 WRITE: bw=101MiB/s (106MB/s), 101MiB/s-101MiB/s (106MB/s-106MB/s), io=1011MiB (1060MB), run=10001-10001msec 00:15:01.200 ----------------------------------------------------- 00:15:01.200 Suppressions used: 00:15:01.200 count bytes template 00:15:01.200 6 48 /usr/src/fio/parse.c 00:15:01.200 2750 264000 /usr/src/fio/iolog.c 00:15:01.200 1 8 libtcmalloc_minimal.so 00:15:01.200 1 904 libcrypto.so 00:15:01.200 ----------------------------------------------------- 00:15:01.200 00:15:01.200 ************************************ 00:15:01.200 END TEST bdev_fio_rw_verify 00:15:01.200 ************************************ 00:15:01.200 00:15:01.200 real 0m12.561s 00:15:01.200 user 0m38.431s 00:15:01.200 sys 0m16.130s 00:15:01.200 07:55:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:01.200 07:55:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:15:01.200 07:55:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:15:01.200 07:55:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:01.200 07:55:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:15:01.200 07:55:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:01.200 07:55:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=trim 00:15:01.200 07:55:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type= 00:15:01.200 07:55:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:15:01.200 07:55:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local 
fio_dir=/usr/src/fio 00:15:01.200 07:55:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:15:01.200 07:55:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z trim ']' 00:15:01.200 07:55:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:15:01.200 07:55:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:01.200 07:55:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:15:01.200 07:55:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1313 -- # '[' trim == verify ']' 00:15:01.200 07:55:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # '[' trim == trim ']' 00:15:01.200 07:55:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1329 -- # echo rw=trimwrite 00:15:01.200 07:55:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:15:01.200 07:55:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "eefec5b1-c084-4464-9a40-55c871f00f85"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "eefec5b1-c084-4464-9a40-55c871f00f85",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "7b550ce1-bcc5-40a7-983e-9fcd858f009b"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "7b550ce1-bcc5-40a7-983e-9fcd858f009b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "ca86e90b-9940-4327-908a-1553cd6c174b"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "ca86e90b-9940-4327-908a-1553cd6c174b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": 
true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n2",' ' "aliases": [' ' "9c0c8cd1-7557-4af6-bc05-da8965539455"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "9c0c8cd1-7557-4af6-bc05-da8965539455",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n3",' ' "aliases": [' ' "92666c67-bf8d-43b5-9ec3-a47198143ec0"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "92666c67-bf8d-43b5-9ec3-a47198143ec0",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "202e1fda-ac3e-4747-94a3-b4fc87462bd0"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "202e1fda-ac3e-4747-94a3-b4fc87462bd0",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:15:01.200 07:55:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:15:01.200 07:55:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:01.200 /home/vagrant/spdk_repo/spdk 00:15:01.200 ************************************ 00:15:01.200 END TEST bdev_fio 00:15:01.200 ************************************ 00:15:01.200 07:55:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:15:01.200 07:55:03 blockdev_xnvme.bdev_fio -- 
bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:15:01.200 07:55:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 00:15:01.200 00:15:01.200 real 0m12.754s 00:15:01.200 user 0m38.535s 00:15:01.200 sys 0m16.216s 00:15:01.200 07:55:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:01.200 07:55:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:15:01.200 07:55:03 blockdev_xnvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:15:01.200 07:55:03 blockdev_xnvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:15:01.200 07:55:03 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:15:01.200 07:55:03 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:01.200 07:55:03 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:01.458 ************************************ 00:15:01.458 START TEST bdev_verify 00:15:01.458 ************************************ 00:15:01.458 07:55:03 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:15:01.458 [2024-10-09 07:55:03.303224] Starting SPDK v25.01-pre git sha1 1c2942c86 / DPDK 24.03.0 initialization... 00:15:01.458 [2024-10-09 07:55:03.303640] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72402 ] 00:15:01.716 [2024-10-09 07:55:03.486055] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:01.716 [2024-10-09 07:55:03.697585] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:15:01.716 [2024-10-09 07:55:03.697604] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:15:02.283 Running I/O for 5 seconds... 
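(The verify pass driven by run_test above amounts to a single bdevperf invocation, reproducible outside the harness with the flags shown in the log — a sketch assuming the same repo layout and a bdev.json describing the six xNVMe bdevs; -q and -o set queue depth and I/O size, -t the runtime in seconds, -m 0x3 the two-core mask.)

    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3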
00:15:04.623 23168.00 IOPS, 90.50 MiB/s [2024-10-09T07:55:07.567Z] 22880.00 IOPS, 89.38 MiB/s [2024-10-09T07:55:08.502Z] 21930.67 IOPS, 85.67 MiB/s [2024-10-09T07:55:09.437Z] 21536.00 IOPS, 84.12 MiB/s [2024-10-09T07:55:09.437Z] 21760.00 IOPS, 85.00 MiB/s 00:15:07.425 Latency(us) 00:15:07.425 [2024-10-09T07:55:09.437Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:07.425 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:07.425 Verification LBA range: start 0x0 length 0xa0000 00:15:07.425 nvme0n1 : 5.06 1568.67 6.13 0.00 0.00 81443.12 12034.79 76260.07 00:15:07.425 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:07.425 Verification LBA range: start 0xa0000 length 0xa0000 00:15:07.425 nvme0n1 : 5.06 1618.42 6.32 0.00 0.00 78934.42 10724.07 71017.19 00:15:07.425 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:07.425 Verification LBA range: start 0x0 length 0xbd0bd 00:15:07.425 nvme1n1 : 5.06 2700.20 10.55 0.00 0.00 47118.02 5481.19 67680.81 00:15:07.425 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:07.425 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:15:07.425 nvme1n1 : 5.08 2785.37 10.88 0.00 0.00 45666.20 3559.80 64821.06 00:15:07.425 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:07.425 Verification LBA range: start 0x0 length 0x80000 00:15:07.425 nvme2n1 : 5.05 1570.13 6.13 0.00 0.00 80924.47 10426.18 98184.84 00:15:07.425 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:07.425 Verification LBA range: start 0x80000 length 0x80000 00:15:07.425 nvme2n1 : 5.08 1637.77 6.40 0.00 0.00 77564.83 11081.54 75306.82 00:15:07.425 Job: nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:07.425 Verification LBA range: start 0x0 length 0x80000 00:15:07.425 nvme2n2 : 5.06 1569.50 6.13 0.00 0.00 80721.63 11379.43 88652.33 00:15:07.425 Job: nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:07.425 Verification LBA range: start 0x80000 length 0x80000 00:15:07.425 nvme2n2 : 5.09 1634.81 6.39 0.00 0.00 77546.51 16562.73 68634.07 00:15:07.425 Job: nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:07.425 Verification LBA range: start 0x0 length 0x80000 00:15:07.425 nvme2n3 : 5.06 1567.54 6.12 0.00 0.00 80594.39 11081.54 95325.09 00:15:07.425 Job: nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:07.425 Verification LBA range: start 0x80000 length 0x80000 00:15:07.425 nvme2n3 : 5.08 1636.85 6.39 0.00 0.00 77291.68 12332.68 63867.81 00:15:07.425 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:07.425 Verification LBA range: start 0x0 length 0x20000 00:15:07.425 nvme3n1 : 5.08 1586.39 6.20 0.00 0.00 79501.88 5093.93 98661.47 00:15:07.425 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:07.425 Verification LBA range: start 0x20000 length 0x20000 00:15:07.425 nvme3n1 : 5.09 1634.11 6.38 0.00 0.00 77289.05 7000.44 70063.94 00:15:07.425 [2024-10-09T07:55:09.437Z] =================================================================================================================== 00:15:07.425 [2024-10-09T07:55:09.437Z] Total : 21509.76 84.02 0.00 0.00 70794.67 3559.80 98661.47 00:15:08.800 00:15:08.800 real 0m7.397s 00:15:08.800 user 0m11.594s 00:15:08.800 sys 0m1.772s 00:15:08.801 07:55:10 blockdev_xnvme.bdev_verify -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:15:08.801 07:55:10 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:15:08.801 ************************************ 00:15:08.801 END TEST bdev_verify 00:15:08.801 ************************************ 00:15:08.801 07:55:10 blockdev_xnvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:15:08.801 07:55:10 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:15:08.801 07:55:10 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:08.801 07:55:10 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:08.801 ************************************ 00:15:08.801 START TEST bdev_verify_big_io 00:15:08.801 ************************************ 00:15:08.801 07:55:10 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:15:08.801 [2024-10-09 07:55:10.769884] Starting SPDK v25.01-pre git sha1 1c2942c86 / DPDK 24.03.0 initialization... 00:15:08.801 [2024-10-09 07:55:10.770067] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72505 ] 00:15:09.060 [2024-10-09 07:55:10.941699] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:09.319 [2024-10-09 07:55:11.137051] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:15:09.319 [2024-10-09 07:55:11.137058] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:15:09.965 Running I/O for 5 seconds... 
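(bdev_verify_big_io repeats the same invocation with 64 KiB I/Os instead of 4 KiB — the only flag that changes is -o; sketch under the same assumptions as above.)

    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 65536 -w verify -t 5 -C -m 0x3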
00:15:15.776 944.00 IOPS, 59.00 MiB/s [2024-10-09T07:55:17.788Z] 2568.00 IOPS, 160.50 MiB/s 00:15:15.776 Latency(us) 00:15:15.776 [2024-10-09T07:55:17.788Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:15.776 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:15:15.776 Verification LBA range: start 0x0 length 0xa000 00:15:15.776 nvme0n1 : 5.92 132.35 8.27 0.00 0.00 941251.98 116773.24 1044763.00 00:15:15.776 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:15:15.776 Verification LBA range: start 0xa000 length 0xa000 00:15:15.776 nvme0n1 : 5.95 130.32 8.15 0.00 0.00 947135.55 84839.33 1121023.07 00:15:15.776 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:15:15.776 Verification LBA range: start 0x0 length 0xbd0b 00:15:15.776 nvme1n1 : 5.94 140.07 8.75 0.00 0.00 861339.30 6791.91 876990.84 00:15:15.776 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:15:15.776 Verification LBA range: start 0xbd0b length 0xbd0b 00:15:15.776 nvme1n1 : 5.96 139.67 8.73 0.00 0.00 860351.12 8936.73 1197283.14 00:15:15.776 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:15:15.776 Verification LBA range: start 0x0 length 0x8000 00:15:15.776 nvme2n1 : 5.93 145.69 9.11 0.00 0.00 799940.80 75783.45 880803.84 00:15:15.776 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:15:15.776 Verification LBA range: start 0x8000 length 0x8000 00:15:15.776 nvme2n1 : 5.99 86.85 5.43 0.00 0.00 1328481.63 160146.15 2852126.72 00:15:15.776 Job: nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:15:15.776 Verification LBA range: start 0x0 length 0x8000 00:15:15.776 nvme2n2 : 5.94 90.20 5.64 0.00 0.00 1247116.29 120586.24 2608094.49 00:15:15.776 Job: nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:15:15.776 Verification LBA range: start 0x8000 length 0x8000 00:15:15.776 nvme2n2 : 5.99 101.46 6.34 0.00 0.00 1114566.28 32410.53 2287802.18 00:15:15.776 Job: nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:15:15.776 Verification LBA range: start 0x0 length 0x8000 00:15:15.776 nvme2n3 : 5.95 115.68 7.23 0.00 0.00 953606.82 11200.70 1609087.53 00:15:15.776 Job: nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:15:15.776 Verification LBA range: start 0x8000 length 0x8000 00:15:15.776 nvme2n3 : 6.00 152.12 9.51 0.00 0.00 712975.76 71017.19 1372681.31 00:15:15.776 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:15:15.776 Verification LBA range: start 0x0 length 0x2000 00:15:15.776 nvme3n1 : 5.95 104.88 6.55 0.00 0.00 1020240.18 11558.17 2150534.05 00:15:15.777 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:15:15.777 Verification LBA range: start 0x2000 length 0x2000 00:15:15.777 nvme3n1 : 6.03 164.42 10.28 0.00 0.00 641931.65 4915.20 1448941.38 00:15:15.777 [2024-10-09T07:55:17.789Z] =================================================================================================================== 00:15:15.777 [2024-10-09T07:55:17.789Z] Total : 1503.71 93.98 0.00 0.00 915831.46 4915.20 2852126.72 00:15:17.151 00:15:17.151 real 0m8.480s 00:15:17.151 user 0m15.226s 00:15:17.151 sys 0m0.512s 00:15:17.151 07:55:19 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:17.151 ************************************ 00:15:17.151 END TEST bdev_verify_big_io 
00:15:17.151 ************************************ 00:15:17.151 07:55:19 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:15:17.408 07:55:19 blockdev_xnvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:17.409 07:55:19 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:15:17.409 07:55:19 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:17.409 07:55:19 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:17.409 ************************************ 00:15:17.409 START TEST bdev_write_zeroes 00:15:17.409 ************************************ 00:15:17.409 07:55:19 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:17.409 [2024-10-09 07:55:19.287094] Starting SPDK v25.01-pre git sha1 1c2942c86 / DPDK 24.03.0 initialization... 00:15:17.409 [2024-10-09 07:55:19.287575] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72623 ] 00:15:17.667 [2024-10-09 07:55:19.459459] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:17.926 [2024-10-09 07:55:19.718205] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:15:18.183 Running I/O for 1 seconds... 00:15:19.556 67808.00 IOPS, 264.88 MiB/s 00:15:19.556 Latency(us) 00:15:19.556 [2024-10-09T07:55:21.568Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:19.556 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:19.556 nvme0n1 : 1.03 10354.80 40.45 0.00 0.00 12347.58 7566.43 28240.06 00:15:19.556 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:19.556 nvme1n1 : 1.03 15108.42 59.02 0.00 0.00 8452.90 3961.95 16324.42 00:15:19.556 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:19.556 nvme2n1 : 1.03 10264.91 40.10 0.00 0.00 12384.56 6583.39 32172.22 00:15:19.556 Job: nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:19.556 nvme2n2 : 1.04 10238.87 40.00 0.00 0.00 12408.98 6791.91 33602.09 00:15:19.556 Job: nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:19.556 nvme2n3 : 1.04 10213.34 39.90 0.00 0.00 12431.41 6911.07 35031.97 00:15:19.556 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:19.556 nvme3n1 : 1.04 10186.26 39.79 0.00 0.00 12454.55 6970.65 36700.16 00:15:19.556 [2024-10-09T07:55:21.568Z] =================================================================================================================== 00:15:19.556 [2024-10-09T07:55:21.568Z] Total : 66366.60 259.24 0.00 0.00 11508.87 3961.95 36700.16 00:15:20.490 00:15:20.490 real 0m3.197s 00:15:20.490 user 0m2.419s 00:15:20.490 sys 0m0.593s 00:15:20.490 ************************************ 00:15:20.490 END TEST bdev_write_zeroes 00:15:20.491 ************************************ 00:15:20.491 07:55:22 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:20.491 07:55:22 
blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:15:20.491 07:55:22 blockdev_xnvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:20.491 07:55:22 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:15:20.491 07:55:22 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:20.491 07:55:22 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:20.491 ************************************ 00:15:20.491 START TEST bdev_json_nonenclosed 00:15:20.491 ************************************ 00:15:20.491 07:55:22 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:20.749 [2024-10-09 07:55:22.540611] Starting SPDK v25.01-pre git sha1 1c2942c86 / DPDK 24.03.0 initialization... 00:15:20.749 [2024-10-09 07:55:22.540788] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72683 ] 00:15:20.749 [2024-10-09 07:55:22.716008] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:21.006 [2024-10-09 07:55:22.958780] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:15:21.006 [2024-10-09 07:55:22.958902] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:15:21.006 [2024-10-09 07:55:22.958937] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:15:21.006 [2024-10-09 07:55:22.958954] app.c:1062:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:15:21.571 ************************************ 00:15:21.571 END TEST bdev_json_nonenclosed 00:15:21.571 ************************************ 00:15:21.571 00:15:21.571 real 0m1.018s 00:15:21.571 user 0m0.766s 00:15:21.571 sys 0m0.142s 00:15:21.571 07:55:23 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:21.571 07:55:23 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:15:21.571 07:55:23 blockdev_xnvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:21.571 07:55:23 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:15:21.571 07:55:23 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:21.571 07:55:23 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:21.571 ************************************ 00:15:21.571 START TEST bdev_json_nonarray 00:15:21.571 ************************************ 00:15:21.571 07:55:23 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:21.829 [2024-10-09 07:55:23.606769] Starting SPDK v25.01-pre git sha1 1c2942c86 / DPDK 24.03.0 initialization... 
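The two negative tests in this stretch (bdev_json_nonenclosed above, bdev_json_nonarray just starting below) feed bdevperf a deliberately malformed --json configuration and require a clean error-and-exit rather than a crash. A minimal sketch of the nonenclosed case, assuming a scratch path: the bdevperf path and flags are taken verbatim from the traced invocation, while the config body is only an illustrative guess at what nonenclosed.json contains.

    # Top-level JSON content that is not wrapped in {} must be rejected.
    printf '"subsystems": []\n' > /tmp/nonenclosed.json
    if /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
          --json /tmp/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''; then
      echo "ERROR: malformed config was accepted"
    else
      echo "rejected as expected"   # matches json_config.c: 608 in the trace above
    fi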
00:15:21.829 [2024-10-09 07:55:23.607203] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72714 ] 00:15:21.829 [2024-10-09 07:55:23.778776] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:22.155 [2024-10-09 07:55:24.031285] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:15:22.155 [2024-10-09 07:55:24.031420] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:15:22.155 [2024-10-09 07:55:24.031452] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:15:22.155 [2024-10-09 07:55:24.031468] app.c:1062:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:15:22.722 ************************************ 00:15:22.722 END TEST bdev_json_nonarray 00:15:22.722 ************************************ 00:15:22.722 00:15:22.722 real 0m0.949s 00:15:22.722 user 0m0.709s 00:15:22.722 sys 0m0.131s 00:15:22.722 07:55:24 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:22.722 07:55:24 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:15:22.722 07:55:24 blockdev_xnvme -- bdev/blockdev.sh@786 -- # [[ xnvme == bdev ]] 00:15:22.722 07:55:24 blockdev_xnvme -- bdev/blockdev.sh@793 -- # [[ xnvme == gpt ]] 00:15:22.722 07:55:24 blockdev_xnvme -- bdev/blockdev.sh@797 -- # [[ xnvme == crypto_sw ]] 00:15:22.722 07:55:24 blockdev_xnvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:15:22.722 07:55:24 blockdev_xnvme -- bdev/blockdev.sh@810 -- # cleanup 00:15:22.722 07:55:24 blockdev_xnvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:15:22.722 07:55:24 blockdev_xnvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:15:22.722 07:55:24 blockdev_xnvme -- bdev/blockdev.sh@26 -- # [[ xnvme == rbd ]] 00:15:22.722 07:55:24 blockdev_xnvme -- bdev/blockdev.sh@30 -- # [[ xnvme == daos ]] 00:15:22.722 07:55:24 blockdev_xnvme -- bdev/blockdev.sh@34 -- # [[ xnvme = \g\p\t ]] 00:15:22.722 07:55:24 blockdev_xnvme -- bdev/blockdev.sh@40 -- # [[ xnvme == xnvme ]] 00:15:22.722 07:55:24 blockdev_xnvme -- bdev/blockdev.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:15:22.980 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:23.915 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:15:23.915 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:15:23.915 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:15:23.915 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:15:24.173 ************************************ 00:15:24.173 END TEST blockdev_xnvme 00:15:24.173 ************************************ 00:15:24.173 00:15:24.173 real 1m4.227s 00:15:24.173 user 1m48.669s 00:15:24.173 sys 0m27.129s 00:15:24.173 07:55:25 blockdev_xnvme -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:24.173 07:55:25 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:24.173 07:55:25 -- spdk/autotest.sh@247 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:15:24.173 07:55:25 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:15:24.173 07:55:25 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:24.173 07:55:25 -- 
common/autotest_common.sh@10 -- # set +x 00:15:24.173 ************************************ 00:15:24.173 START TEST ublk 00:15:24.173 ************************************ 00:15:24.173 07:55:25 ublk -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:15:24.173 * Looking for test storage... 00:15:24.173 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:15:24.173 07:55:26 ublk -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:15:24.173 07:55:26 ublk -- common/autotest_common.sh@1681 -- # lcov --version 00:15:24.173 07:55:26 ublk -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:15:24.173 07:55:26 ublk -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:15:24.173 07:55:26 ublk -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:24.173 07:55:26 ublk -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:24.173 07:55:26 ublk -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:24.173 07:55:26 ublk -- scripts/common.sh@336 -- # IFS=.-: 00:15:24.173 07:55:26 ublk -- scripts/common.sh@336 -- # read -ra ver1 00:15:24.173 07:55:26 ublk -- scripts/common.sh@337 -- # IFS=.-: 00:15:24.173 07:55:26 ublk -- scripts/common.sh@337 -- # read -ra ver2 00:15:24.173 07:55:26 ublk -- scripts/common.sh@338 -- # local 'op=<' 00:15:24.173 07:55:26 ublk -- scripts/common.sh@340 -- # ver1_l=2 00:15:24.174 07:55:26 ublk -- scripts/common.sh@341 -- # ver2_l=1 00:15:24.174 07:55:26 ublk -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:24.174 07:55:26 ublk -- scripts/common.sh@344 -- # case "$op" in 00:15:24.174 07:55:26 ublk -- scripts/common.sh@345 -- # : 1 00:15:24.174 07:55:26 ublk -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:24.174 07:55:26 ublk -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:24.174 07:55:26 ublk -- scripts/common.sh@365 -- # decimal 1 00:15:24.174 07:55:26 ublk -- scripts/common.sh@353 -- # local d=1 00:15:24.174 07:55:26 ublk -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:24.174 07:55:26 ublk -- scripts/common.sh@355 -- # echo 1 00:15:24.174 07:55:26 ublk -- scripts/common.sh@365 -- # ver1[v]=1 00:15:24.174 07:55:26 ublk -- scripts/common.sh@366 -- # decimal 2 00:15:24.174 07:55:26 ublk -- scripts/common.sh@353 -- # local d=2 00:15:24.174 07:55:26 ublk -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:24.174 07:55:26 ublk -- scripts/common.sh@355 -- # echo 2 00:15:24.174 07:55:26 ublk -- scripts/common.sh@366 -- # ver2[v]=2 00:15:24.174 07:55:26 ublk -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:24.174 07:55:26 ublk -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:24.174 07:55:26 ublk -- scripts/common.sh@368 -- # return 0 00:15:24.174 07:55:26 ublk -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:24.174 07:55:26 ublk -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:15:24.174 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:24.174 --rc genhtml_branch_coverage=1 00:15:24.174 --rc genhtml_function_coverage=1 00:15:24.174 --rc genhtml_legend=1 00:15:24.174 --rc geninfo_all_blocks=1 00:15:24.174 --rc geninfo_unexecuted_blocks=1 00:15:24.174 00:15:24.174 ' 00:15:24.174 07:55:26 ublk -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:15:24.174 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:24.174 --rc genhtml_branch_coverage=1 00:15:24.174 --rc genhtml_function_coverage=1 00:15:24.174 --rc genhtml_legend=1 00:15:24.174 --rc geninfo_all_blocks=1 00:15:24.174 --rc geninfo_unexecuted_blocks=1 00:15:24.174 00:15:24.174 ' 00:15:24.174 07:55:26 ublk -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:15:24.174 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:24.174 --rc genhtml_branch_coverage=1 00:15:24.174 --rc genhtml_function_coverage=1 00:15:24.174 --rc genhtml_legend=1 00:15:24.174 --rc geninfo_all_blocks=1 00:15:24.174 --rc geninfo_unexecuted_blocks=1 00:15:24.174 00:15:24.174 ' 00:15:24.174 07:55:26 ublk -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:15:24.174 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:24.174 --rc genhtml_branch_coverage=1 00:15:24.174 --rc genhtml_function_coverage=1 00:15:24.174 --rc genhtml_legend=1 00:15:24.174 --rc geninfo_all_blocks=1 00:15:24.174 --rc geninfo_unexecuted_blocks=1 00:15:24.174 00:15:24.174 ' 00:15:24.174 07:55:26 ublk -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:15:24.174 07:55:26 ublk -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:15:24.174 07:55:26 ublk -- lvol/common.sh@7 -- # MALLOC_BS=512 00:15:24.174 07:55:26 ublk -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:15:24.174 07:55:26 ublk -- lvol/common.sh@9 -- # AIO_BS=4096 00:15:24.174 07:55:26 ublk -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:15:24.174 07:55:26 ublk -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:15:24.174 07:55:26 ublk -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:15:24.174 07:55:26 ublk -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:15:24.174 07:55:26 ublk -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:15:24.174 07:55:26 ublk -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:15:24.174 07:55:26 ublk -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:15:24.174 07:55:26 ublk 
-- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:15:24.174 07:55:26 ublk -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:15:24.174 07:55:26 ublk -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:15:24.174 07:55:26 ublk -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:15:24.174 07:55:26 ublk -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:15:24.174 07:55:26 ublk -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:15:24.174 07:55:26 ublk -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:15:24.174 07:55:26 ublk -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:15:24.174 07:55:26 ublk -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:15:24.174 07:55:26 ublk -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:24.174 07:55:26 ublk -- common/autotest_common.sh@10 -- # set +x 00:15:24.174 ************************************ 00:15:24.174 START TEST test_save_ublk_config 00:15:24.174 ************************************ 00:15:24.174 07:55:26 ublk.test_save_ublk_config -- common/autotest_common.sh@1125 -- # test_save_config 00:15:24.174 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:24.174 07:55:26 ublk.test_save_ublk_config -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:15:24.174 07:55:26 ublk.test_save_ublk_config -- ublk/ublk.sh@103 -- # tgtpid=72998 00:15:24.174 07:55:26 ublk.test_save_ublk_config -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:15:24.174 07:55:26 ublk.test_save_ublk_config -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:15:24.174 07:55:26 ublk.test_save_ublk_config -- ublk/ublk.sh@106 -- # waitforlisten 72998 00:15:24.174 07:55:26 ublk.test_save_ublk_config -- common/autotest_common.sh@831 -- # '[' -z 72998 ']' 00:15:24.174 07:55:26 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:24.174 07:55:26 ublk.test_save_ublk_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:24.174 07:55:26 ublk.test_save_ublk_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:24.174 07:55:26 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:24.174 07:55:26 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:15:24.432 [2024-10-09 07:55:26.338985] Starting SPDK v25.01-pre git sha1 1c2942c86 / DPDK 24.03.0 initialization... 
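Before any configuration can be saved, the test loads the kernel driver (ublk.sh@133 above) and brings up a target with ublk tracing enabled, then blocks until the RPC socket answers. A condensed sketch of the bring-up traced at ublk.sh@102-@106, with paths verbatim from the trace; waitforlisten is the autotest_common.sh helper that polls the default /var/tmp/spdk.sock.

    modprobe ublk_drv                                          # kernel-side ublk driver
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk &  # target with ublk tracing
    tgtpid=$!
    waitforlisten "$tgtpid"                                    # wait on /var/tmp/spdk.sock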
00:15:24.432 [2024-10-09 07:55:26.339221] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72998 ] 00:15:24.690 [2024-10-09 07:55:26.527200] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:24.948 [2024-10-09 07:55:26.768381] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:15:25.887 07:55:27 ublk.test_save_ublk_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:25.887 07:55:27 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # return 0 00:15:25.887 07:55:27 ublk.test_save_ublk_config -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:15:25.887 07:55:27 ublk.test_save_ublk_config -- ublk/ublk.sh@108 -- # rpc_cmd 00:15:25.887 07:55:27 ublk.test_save_ublk_config -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.887 07:55:27 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:15:25.887 [2024-10-09 07:55:27.572370] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:15:25.887 [2024-10-09 07:55:27.573483] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:15:25.887 malloc0 00:15:25.887 [2024-10-09 07:55:27.652506] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:15:25.887 [2024-10-09 07:55:27.652639] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:15:25.887 [2024-10-09 07:55:27.652660] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:15:25.887 [2024-10-09 07:55:27.652673] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:15:25.887 [2024-10-09 07:55:27.660506] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:15:25.887 [2024-10-09 07:55:27.660537] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:15:25.887 [2024-10-09 07:55:27.668377] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:15:25.887 [2024-10-09 07:55:27.668501] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:15:25.887 [2024-10-09 07:55:27.684366] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:15:25.887 0 00:15:25.887 07:55:27 ublk.test_save_ublk_config -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.887 07:55:27 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:15:25.887 07:55:27 ublk.test_save_ublk_config -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.887 07:55:27 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:15:26.146 07:55:27 ublk.test_save_ublk_config -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.146 07:55:27 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # config='{ 00:15:26.146 "subsystems": [ 00:15:26.146 { 00:15:26.146 "subsystem": "fsdev", 00:15:26.146 "config": [ 00:15:26.146 { 00:15:26.146 "method": "fsdev_set_opts", 00:15:26.146 "params": { 00:15:26.146 "fsdev_io_pool_size": 65535, 00:15:26.146 "fsdev_io_cache_size": 256 00:15:26.146 } 00:15:26.146 } 00:15:26.146 ] 00:15:26.146 }, 00:15:26.146 { 00:15:26.146 "subsystem": "keyring", 00:15:26.146 "config": [] 00:15:26.146 }, 00:15:26.146 { 00:15:26.146 "subsystem": "iobuf", 00:15:26.146 "config": [ 00:15:26.146 { 
00:15:26.146 "method": "iobuf_set_options", 00:15:26.146 "params": { 00:15:26.146 "small_pool_count": 8192, 00:15:26.146 "large_pool_count": 1024, 00:15:26.146 "small_bufsize": 8192, 00:15:26.146 "large_bufsize": 135168 00:15:26.146 } 00:15:26.146 } 00:15:26.146 ] 00:15:26.146 }, 00:15:26.146 { 00:15:26.146 "subsystem": "sock", 00:15:26.146 "config": [ 00:15:26.146 { 00:15:26.146 "method": "sock_set_default_impl", 00:15:26.146 "params": { 00:15:26.146 "impl_name": "posix" 00:15:26.146 } 00:15:26.146 }, 00:15:26.146 { 00:15:26.146 "method": "sock_impl_set_options", 00:15:26.146 "params": { 00:15:26.146 "impl_name": "ssl", 00:15:26.146 "recv_buf_size": 4096, 00:15:26.146 "send_buf_size": 4096, 00:15:26.146 "enable_recv_pipe": true, 00:15:26.146 "enable_quickack": false, 00:15:26.146 "enable_placement_id": 0, 00:15:26.146 "enable_zerocopy_send_server": true, 00:15:26.146 "enable_zerocopy_send_client": false, 00:15:26.146 "zerocopy_threshold": 0, 00:15:26.146 "tls_version": 0, 00:15:26.146 "enable_ktls": false 00:15:26.146 } 00:15:26.146 }, 00:15:26.146 { 00:15:26.146 "method": "sock_impl_set_options", 00:15:26.146 "params": { 00:15:26.146 "impl_name": "posix", 00:15:26.146 "recv_buf_size": 2097152, 00:15:26.146 "send_buf_size": 2097152, 00:15:26.146 "enable_recv_pipe": true, 00:15:26.146 "enable_quickack": false, 00:15:26.146 "enable_placement_id": 0, 00:15:26.146 "enable_zerocopy_send_server": true, 00:15:26.146 "enable_zerocopy_send_client": false, 00:15:26.146 "zerocopy_threshold": 0, 00:15:26.146 "tls_version": 0, 00:15:26.146 "enable_ktls": false 00:15:26.146 } 00:15:26.146 } 00:15:26.146 ] 00:15:26.146 }, 00:15:26.146 { 00:15:26.146 "subsystem": "vmd", 00:15:26.146 "config": [] 00:15:26.146 }, 00:15:26.146 { 00:15:26.146 "subsystem": "accel", 00:15:26.146 "config": [ 00:15:26.146 { 00:15:26.146 "method": "accel_set_options", 00:15:26.146 "params": { 00:15:26.146 "small_cache_size": 128, 00:15:26.146 "large_cache_size": 16, 00:15:26.146 "task_count": 2048, 00:15:26.146 "sequence_count": 2048, 00:15:26.146 "buf_count": 2048 00:15:26.146 } 00:15:26.146 } 00:15:26.146 ] 00:15:26.146 }, 00:15:26.146 { 00:15:26.146 "subsystem": "bdev", 00:15:26.146 "config": [ 00:15:26.146 { 00:15:26.146 "method": "bdev_set_options", 00:15:26.146 "params": { 00:15:26.146 "bdev_io_pool_size": 65535, 00:15:26.146 "bdev_io_cache_size": 256, 00:15:26.146 "bdev_auto_examine": true, 00:15:26.146 "iobuf_small_cache_size": 128, 00:15:26.146 "iobuf_large_cache_size": 16 00:15:26.146 } 00:15:26.146 }, 00:15:26.146 { 00:15:26.146 "method": "bdev_raid_set_options", 00:15:26.146 "params": { 00:15:26.146 "process_window_size_kb": 1024, 00:15:26.146 "process_max_bandwidth_mb_sec": 0 00:15:26.146 } 00:15:26.146 }, 00:15:26.146 { 00:15:26.146 "method": "bdev_iscsi_set_options", 00:15:26.146 "params": { 00:15:26.146 "timeout_sec": 30 00:15:26.146 } 00:15:26.146 }, 00:15:26.146 { 00:15:26.146 "method": "bdev_nvme_set_options", 00:15:26.146 "params": { 00:15:26.146 "action_on_timeout": "none", 00:15:26.146 "timeout_us": 0, 00:15:26.146 "timeout_admin_us": 0, 00:15:26.146 "keep_alive_timeout_ms": 10000, 00:15:26.146 "arbitration_burst": 0, 00:15:26.146 "low_priority_weight": 0, 00:15:26.146 "medium_priority_weight": 0, 00:15:26.146 "high_priority_weight": 0, 00:15:26.146 "nvme_adminq_poll_period_us": 10000, 00:15:26.146 "nvme_ioq_poll_period_us": 0, 00:15:26.146 "io_queue_requests": 0, 00:15:26.146 "delay_cmd_submit": true, 00:15:26.146 "transport_retry_count": 4, 00:15:26.146 "bdev_retry_count": 3, 00:15:26.146 
"transport_ack_timeout": 0, 00:15:26.146 "ctrlr_loss_timeout_sec": 0, 00:15:26.146 "reconnect_delay_sec": 0, 00:15:26.146 "fast_io_fail_timeout_sec": 0, 00:15:26.146 "disable_auto_failback": false, 00:15:26.146 "generate_uuids": false, 00:15:26.146 "transport_tos": 0, 00:15:26.146 "nvme_error_stat": false, 00:15:26.146 "rdma_srq_size": 0, 00:15:26.146 "io_path_stat": false, 00:15:26.146 "allow_accel_sequence": false, 00:15:26.146 "rdma_max_cq_size": 0, 00:15:26.146 "rdma_cm_event_timeout_ms": 0, 00:15:26.146 "dhchap_digests": [ 00:15:26.146 "sha256", 00:15:26.146 "sha384", 00:15:26.146 "sha512" 00:15:26.146 ], 00:15:26.146 "dhchap_dhgroups": [ 00:15:26.146 "null", 00:15:26.146 "ffdhe2048", 00:15:26.146 "ffdhe3072", 00:15:26.146 "ffdhe4096", 00:15:26.146 "ffdhe6144", 00:15:26.146 "ffdhe8192" 00:15:26.146 ] 00:15:26.146 } 00:15:26.146 }, 00:15:26.146 { 00:15:26.146 "method": "bdev_nvme_set_hotplug", 00:15:26.146 "params": { 00:15:26.146 "period_us": 100000, 00:15:26.146 "enable": false 00:15:26.146 } 00:15:26.147 }, 00:15:26.147 { 00:15:26.147 "method": "bdev_malloc_create", 00:15:26.147 "params": { 00:15:26.147 "name": "malloc0", 00:15:26.147 "num_blocks": 8192, 00:15:26.147 "block_size": 4096, 00:15:26.147 "physical_block_size": 4096, 00:15:26.147 "uuid": "0ffb4d4a-d43e-4cab-9545-d7179a6ab41b", 00:15:26.147 "optimal_io_boundary": 0, 00:15:26.147 "md_size": 0, 00:15:26.147 "dif_type": 0, 00:15:26.147 "dif_is_head_of_md": false, 00:15:26.147 "dif_pi_format": 0 00:15:26.147 } 00:15:26.147 }, 00:15:26.147 { 00:15:26.147 "method": "bdev_wait_for_examine" 00:15:26.147 } 00:15:26.147 ] 00:15:26.147 }, 00:15:26.147 { 00:15:26.147 "subsystem": "scsi", 00:15:26.147 "config": null 00:15:26.147 }, 00:15:26.147 { 00:15:26.147 "subsystem": "scheduler", 00:15:26.147 "config": [ 00:15:26.147 { 00:15:26.147 "method": "framework_set_scheduler", 00:15:26.147 "params": { 00:15:26.147 "name": "static" 00:15:26.147 } 00:15:26.147 } 00:15:26.147 ] 00:15:26.147 }, 00:15:26.147 { 00:15:26.147 "subsystem": "vhost_scsi", 00:15:26.147 "config": [] 00:15:26.147 }, 00:15:26.147 { 00:15:26.147 "subsystem": "vhost_blk", 00:15:26.147 "config": [] 00:15:26.147 }, 00:15:26.147 { 00:15:26.147 "subsystem": "ublk", 00:15:26.147 "config": [ 00:15:26.147 { 00:15:26.147 "method": "ublk_create_target", 00:15:26.147 "params": { 00:15:26.147 "cpumask": "1" 00:15:26.147 } 00:15:26.147 }, 00:15:26.147 { 00:15:26.147 "method": "ublk_start_disk", 00:15:26.147 "params": { 00:15:26.147 "bdev_name": "malloc0", 00:15:26.147 "ublk_id": 0, 00:15:26.147 "num_queues": 1, 00:15:26.147 "queue_depth": 128 00:15:26.147 } 00:15:26.147 } 00:15:26.147 ] 00:15:26.147 }, 00:15:26.147 { 00:15:26.147 "subsystem": "nbd", 00:15:26.147 "config": [] 00:15:26.147 }, 00:15:26.147 { 00:15:26.147 "subsystem": "nvmf", 00:15:26.147 "config": [ 00:15:26.147 { 00:15:26.147 "method": "nvmf_set_config", 00:15:26.147 "params": { 00:15:26.147 "discovery_filter": "match_any", 00:15:26.147 "admin_cmd_passthru": { 00:15:26.147 "identify_ctrlr": false 00:15:26.147 }, 00:15:26.147 "dhchap_digests": [ 00:15:26.147 "sha256", 00:15:26.147 "sha384", 00:15:26.147 "sha512" 00:15:26.147 ], 00:15:26.147 "dhchap_dhgroups": [ 00:15:26.147 "null", 00:15:26.147 "ffdhe2048", 00:15:26.147 "ffdhe3072", 00:15:26.147 "ffdhe4096", 00:15:26.147 "ffdhe6144", 00:15:26.147 "ffdhe8192" 00:15:26.147 ] 00:15:26.147 } 00:15:26.147 }, 00:15:26.147 { 00:15:26.147 "method": "nvmf_set_max_subsystems", 00:15:26.147 "params": { 00:15:26.147 "max_subsystems": 1024 00:15:26.147 } 00:15:26.147 }, 00:15:26.147 
{ 00:15:26.147 "method": "nvmf_set_crdt", 00:15:26.147 "params": { 00:15:26.147 "crdt1": 0, 00:15:26.147 "crdt2": 0, 00:15:26.147 "crdt3": 0 00:15:26.147 } 00:15:26.147 } 00:15:26.147 ] 00:15:26.147 }, 00:15:26.147 { 00:15:26.147 "subsystem": "iscsi", 00:15:26.147 "config": [ 00:15:26.147 { 00:15:26.147 "method": "iscsi_set_options", 00:15:26.147 "params": { 00:15:26.147 "node_base": "iqn.2016-06.io.spdk", 00:15:26.147 "max_sessions": 128, 00:15:26.147 "max_connections_per_session": 2, 00:15:26.147 "max_queue_depth": 64, 00:15:26.147 "default_time2wait": 2, 00:15:26.147 "default_time2retain": 20, 00:15:26.147 "first_burst_length": 8192, 00:15:26.147 "immediate_data": true, 00:15:26.147 "allow_duplicated_isid": false, 00:15:26.147 "error_recovery_level": 0, 00:15:26.147 "nop_timeout": 60, 00:15:26.147 "nop_in_interval": 30, 00:15:26.147 "disable_chap": false, 00:15:26.147 "require_chap": false, 00:15:26.147 "mutual_chap": false, 00:15:26.147 "chap_group": 0, 00:15:26.147 "max_large_datain_per_connection": 64, 00:15:26.147 "max_r2t_per_connection": 4, 00:15:26.147 "pdu_pool_size": 36864, 00:15:26.147 "immediate_data_pool_size": 16384, 00:15:26.147 "data_out_pool_size": 2048 00:15:26.147 } 00:15:26.147 } 00:15:26.147 ] 00:15:26.147 } 00:15:26.147 ] 00:15:26.147 }' 00:15:26.147 07:55:27 ublk.test_save_ublk_config -- ublk/ublk.sh@116 -- # killprocess 72998 00:15:26.147 07:55:27 ublk.test_save_ublk_config -- common/autotest_common.sh@950 -- # '[' -z 72998 ']' 00:15:26.147 07:55:27 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # kill -0 72998 00:15:26.147 07:55:28 ublk.test_save_ublk_config -- common/autotest_common.sh@955 -- # uname 00:15:26.147 07:55:28 ublk.test_save_ublk_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:26.147 07:55:28 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72998 00:15:26.147 killing process with pid 72998 00:15:26.147 07:55:28 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:26.147 07:55:28 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:26.147 07:55:28 ublk.test_save_ublk_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72998' 00:15:26.147 07:55:28 ublk.test_save_ublk_config -- common/autotest_common.sh@969 -- # kill 72998 00:15:26.147 07:55:28 ublk.test_save_ublk_config -- common/autotest_common.sh@974 -- # wait 72998 00:15:27.523 [2024-10-09 07:55:29.306283] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:15:27.523 [2024-10-09 07:55:29.350507] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:15:27.523 [2024-10-09 07:55:29.350704] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:15:27.523 [2024-10-09 07:55:29.359381] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:15:27.523 [2024-10-09 07:55:29.359448] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:15:27.523 [2024-10-09 07:55:29.359466] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:15:27.523 [2024-10-09 07:55:29.359511] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:15:27.523 [2024-10-09 07:55:29.359693] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:15:29.422 07:55:31 ublk.test_save_ublk_config -- ublk/ublk.sh@119 -- # tgtpid=73068 00:15:29.422 07:55:31 ublk.test_save_ublk_config -- ublk/ublk.sh@121 -- # waitforlisten 73068 
00:15:29.422 07:55:31 ublk.test_save_ublk_config -- common/autotest_common.sh@831 -- # '[' -z 73068 ']' 00:15:29.422 07:55:31 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:29.422 07:55:31 ublk.test_save_ublk_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:29.422 07:55:31 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:15:29.422 07:55:31 ublk.test_save_ublk_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:29.422 07:55:31 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # echo '{ 00:15:29.422 "subsystems": [ 00:15:29.422 { 00:15:29.422 "subsystem": "fsdev", 00:15:29.422 "config": [ 00:15:29.422 { 00:15:29.422 "method": "fsdev_set_opts", 00:15:29.422 "params": { 00:15:29.422 "fsdev_io_pool_size": 65535, 00:15:29.422 "fsdev_io_cache_size": 256 00:15:29.422 } 00:15:29.422 } 00:15:29.422 ] 00:15:29.422 }, 00:15:29.422 { 00:15:29.422 "subsystem": "keyring", 00:15:29.422 "config": [] 00:15:29.422 }, 00:15:29.422 { 00:15:29.422 "subsystem": "iobuf", 00:15:29.422 "config": [ 00:15:29.422 { 00:15:29.422 "method": "iobuf_set_options", 00:15:29.422 "params": { 00:15:29.422 "small_pool_count": 8192, 00:15:29.422 "large_pool_count": 1024, 00:15:29.422 "small_bufsize": 8192, 00:15:29.422 "large_bufsize": 135168 00:15:29.422 } 00:15:29.422 } 00:15:29.422 ] 00:15:29.422 }, 00:15:29.422 { 00:15:29.422 "subsystem": "sock", 00:15:29.422 "config": [ 00:15:29.422 { 00:15:29.422 "method": "sock_set_default_impl", 00:15:29.422 "params": { 00:15:29.422 "impl_name": "posix" 00:15:29.422 } 00:15:29.422 }, 00:15:29.422 { 00:15:29.422 "method": "sock_impl_set_options", 00:15:29.422 "params": { 00:15:29.422 "impl_name": "ssl", 00:15:29.422 "recv_buf_size": 4096, 00:15:29.422 "send_buf_size": 4096, 00:15:29.422 "enable_recv_pipe": true, 00:15:29.422 "enable_quickack": false, 00:15:29.422 "enable_placement_id": 0, 00:15:29.422 "enable_zerocopy_send_server": true, 00:15:29.422 "enable_zerocopy_send_client": false, 00:15:29.422 "zerocopy_threshold": 0, 00:15:29.422 "tls_version": 0, 00:15:29.422 "enable_ktls": false 00:15:29.422 } 00:15:29.422 }, 00:15:29.422 { 00:15:29.422 "method": "sock_impl_set_options", 00:15:29.422 "params": { 00:15:29.422 "impl_name": "posix", 00:15:29.422 "recv_buf_size": 2097152, 00:15:29.422 "send_buf_size": 2097152, 00:15:29.422 "enable_recv_pipe": true, 00:15:29.422 "enable_quickack": false, 00:15:29.422 "enable_placement_id": 0, 00:15:29.422 "enable_zerocopy_send_server": true, 00:15:29.422 "enable_zerocopy_send_client": false, 00:15:29.422 "zerocopy_threshold": 0, 00:15:29.422 "tls_version": 0, 00:15:29.422 "enable_ktls": false 00:15:29.422 } 00:15:29.422 } 00:15:29.422 ] 00:15:29.422 }, 00:15:29.422 { 00:15:29.422 "subsystem": "vmd", 00:15:29.422 "config": [] 00:15:29.422 }, 00:15:29.422 { 00:15:29.422 "subsystem": "accel", 00:15:29.422 "config": [ 00:15:29.422 { 00:15:29.422 "method": "accel_set_options", 00:15:29.422 "params": { 00:15:29.422 "small_cache_size": 128, 00:15:29.422 "large_cache_size": 16, 00:15:29.422 "task_count": 2048, 00:15:29.422 "sequence_count": 2048, 00:15:29.422 "buf_count": 2048 00:15:29.422 } 00:15:29.422 } 00:15:29.422 ] 00:15:29.422 }, 00:15:29.422 { 00:15:29.422 "subsystem": "bdev", 00:15:29.422 "config": [ 00:15:29.422 { 00:15:29.422 "method": "bdev_set_options", 00:15:29.422 "params": { 00:15:29.422 
"bdev_io_pool_size": 65535, 00:15:29.422 "bdev_io_cache_size": 256, 00:15:29.422 "bdev_auto_examine": true, 00:15:29.422 "iobuf_small_cache_size": 128, 00:15:29.422 "iobuf_large_cache_size": 16 00:15:29.422 } 00:15:29.422 }, 00:15:29.422 { 00:15:29.422 "method": "bdev_raid_set_options", 00:15:29.422 "params": { 00:15:29.422 "process_window_size_kb": 1024, 00:15:29.422 "process_max_bandwidth_mb_sec": 0 00:15:29.422 } 00:15:29.422 }, 00:15:29.422 { 00:15:29.422 "method": "bdev_iscsi_set_options", 00:15:29.422 "params": { 00:15:29.422 "timeout_sec": 30 00:15:29.422 } 00:15:29.422 }, 00:15:29.422 { 00:15:29.422 "method": "bdev_nvme_set_options", 00:15:29.422 "params": { 00:15:29.422 "action_on_timeout": "none", 00:15:29.422 "timeout_us": 0, 00:15:29.422 "timeout_admin_us": 0, 00:15:29.422 "keep_alive_timeout_ms": 10000, 00:15:29.422 "arbitration_burst": 0, 00:15:29.422 "low_priority_weight": 0, 00:15:29.422 "medium_priority_weight": 0, 00:15:29.422 "high_priority_weight": 0, 00:15:29.422 "nvme_adminq_poll_period_us": 10000, 00:15:29.422 "nvme_ioq_poll_period_us": 0, 00:15:29.422 "io_queue_requests": 0, 00:15:29.422 "delay_cmd_submit": true, 00:15:29.422 "transport_retry_count": 4, 00:15:29.422 "bdev_retry_count": 3, 00:15:29.422 "transport_ack_timeout": 0, 00:15:29.422 "ctrlr_loss_timeout_sec": 0, 00:15:29.422 "reconnect_delay_sec": 0, 00:15:29.422 "fast_io_fail_timeout_sec": 0, 00:15:29.422 "disable_auto_failback": false, 00:15:29.422 "generate_uuids": false, 00:15:29.422 "transport_tos": 0, 00:15:29.422 "nvme_error_stat": false, 00:15:29.422 "rdma_srq_size": 0, 00:15:29.422 "io_path_stat": false, 00:15:29.422 "allow_accel_sequence": false, 00:15:29.422 "rdma_max_cq_size": 0, 00:15:29.422 "rdma_cm_event_timeout_ms": 0, 00:15:29.422 "dhchap_digests": [ 00:15:29.422 "sha256", 00:15:29.422 "sha384", 00:15:29.422 "sha512" 00:15:29.422 ], 00:15:29.422 "dhchap_dhgroups": [ 00:15:29.422 "null", 00:15:29.422 "ffdhe2048", 00:15:29.422 "ffdhe3072", 00:15:29.422 "ffdhe4096", 00:15:29.422 "ffdhe6144", 00:15:29.422 "ffdhe8192" 00:15:29.422 ] 00:15:29.422 } 00:15:29.422 }, 00:15:29.422 { 00:15:29.422 "method": "bdev_nvme_set_hotplug", 00:15:29.422 "params": { 00:15:29.422 "period_us": 100000, 00:15:29.422 "enable": false 00:15:29.422 } 00:15:29.422 }, 00:15:29.422 { 00:15:29.422 "method": "bdev_malloc_create", 00:15:29.422 "params": { 00:15:29.422 "name": "malloc0", 00:15:29.422 "num_blocks": 8192, 00:15:29.422 "block_size": 4096, 00:15:29.422 "physical_block_size": 4096, 00:15:29.422 "uuid": "0ffb4d4a-d43e-4cab-9545-d7179a6ab41b", 00:15:29.422 "optimal_io_boundary": 0, 00:15:29.422 "md_size": 0, 00:15:29.422 "dif_type": 0, 00:15:29.422 "dif_is_head_of_md": false, 00:15:29.422 "dif_pi_format": 0 00:15:29.422 } 00:15:29.422 }, 00:15:29.422 { 00:15:29.422 "method": "bdev_wait_for_examine" 00:15:29.422 } 00:15:29.422 ] 00:15:29.422 }, 00:15:29.422 { 00:15:29.422 "subsystem": "scsi", 00:15:29.422 "config": null 00:15:29.422 }, 00:15:29.422 { 00:15:29.422 "subsystem": "scheduler", 00:15:29.422 "config": [ 00:15:29.422 { 00:15:29.422 "method": "framework_set_scheduler", 00:15:29.422 "params": { 00:15:29.422 "name": "static" 00:15:29.422 } 00:15:29.422 } 00:15:29.422 ] 00:15:29.422 }, 00:15:29.422 { 00:15:29.422 "subsystem": "vhost_scsi", 00:15:29.422 "config": [] 00:15:29.422 }, 00:15:29.422 { 00:15:29.422 "subsystem": "vhost_blk", 00:15:29.422 "config": [] 00:15:29.422 }, 00:15:29.422 { 00:15:29.422 "subsystem": "ublk", 00:15:29.422 "config": [ 00:15:29.422 { 00:15:29.422 "method": "ublk_create_target", 
00:15:29.422 "params": { 00:15:29.422 "cpumask": "1" 00:15:29.422 } 00:15:29.422 }, 00:15:29.422 { 00:15:29.422 "method": "ublk_start_disk", 00:15:29.423 "params": { 00:15:29.423 "bdev_name": "malloc0", 00:15:29.423 "ublk_id": 0, 00:15:29.423 "num_queues": 1, 00:15:29.423 "queue_depth": 128 00:15:29.423 } 00:15:29.423 } 00:15:29.423 ] 00:15:29.423 }, 00:15:29.423 { 00:15:29.423 "subsystem": "nbd", 00:15:29.423 "config": [] 00:15:29.423 }, 00:15:29.423 { 00:15:29.423 "subsystem": "nvmf", 00:15:29.423 "config": [ 00:15:29.423 { 00:15:29.423 "method": "nvmf_set_config", 00:15:29.423 "params": { 00:15:29.423 "discovery_filter": "match_any", 00:15:29.423 "admin_cmd_passthru": { 00:15:29.423 "identify_ctrlr": false 00:15:29.423 }, 00:15:29.423 "dhchap_digests": [ 00:15:29.423 "sha256", 00:15:29.423 "sha384", 00:15:29.423 "sha512" 00:15:29.423 ], 00:15:29.423 "dhchap_dhgroups": [ 00:15:29.423 "null", 00:15:29.423 "ffdhe2048", 00:15:29.423 "ffdhe3072", 00:15:29.423 "ffdhe4096", 00:15:29.423 "ffdhe6144", 00:15:29.423 "ffdhe8192" 00:15:29.423 ] 00:15:29.423 } 00:15:29.423 }, 00:15:29.423 { 00:15:29.423 "method": "nvmf_set_max_subsystems", 00:15:29.423 "params": { 00:15:29.423 "max_subsystems": 1024 00:15:29.423 } 00:15:29.423 }, 00:15:29.423 { 00:15:29.423 "method": "nvmf_set_crdt", 00:15:29.423 "params": { 00:15:29.423 "crdt1": 0, 00:15:29.423 "crdt2": 0, 00:15:29.423 "crdt3": 0 00:15:29.423 } 00:15:29.423 } 00:15:29.423 ] 00:15:29.423 }, 00:15:29.423 { 00:15:29.423 "subsystem": "iscsi", 00:15:29.423 "config": [ 00:15:29.423 { 00:15:29.423 "method": "iscsi_set_options", 00:15:29.423 "params": { 00:15:29.423 "node_base": "iqn.2016-06.io.spdk", 00:15:29.423 "max_sessions": 128, 00:15:29.423 "max_connections_per_session": 2, 00:15:29.423 "max_queue_depth": 64, 00:15:29.423 "default_time2wait": 2, 00:15:29.423 "default_time2retain": 20, 00:15:29.423 "first_burst_length": 8192, 00:15:29.423 "immediate_data": true, 00:15:29.423 "allow_duplicated_isid": false, 00:15:29.423 "error_recovery_level": 0, 00:15:29.423 "nop_timeout": 60, 00:15:29.423 "nop_in_interval": 30, 00:15:29.423 "disable_chap": false, 00:15:29.423 "require_chap": false, 00:15:29.423 "mutual_chap": false, 00:15:29.423 "chap_group": 0, 00:15:29.423 "max_large_datain_per_connection": 64, 00:15:29.423 "max_r2t_per_connection": 4, 00:15:29.423 "pdu_pool_size": 36864, 00:15:29.423 "immediate_data_pool_size": 16384, 00:15:29.423 "data_out_pool_size": 2048 00:15:29.423 } 00:15:29.423 } 00:15:29.423 ] 00:15:29.423 } 00:15:29.423 ] 00:15:29.423 }' 00:15:29.423 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:29.423 07:55:31 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:29.423 07:55:31 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:15:29.423 [2024-10-09 07:55:31.337137] Starting SPDK v25.01-pre git sha1 1c2942c86 / DPDK 24.03.0 initialization... 
00:15:29.423 [2024-10-09 07:55:31.337525] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73068 ] 00:15:29.681 [2024-10-09 07:55:31.501060] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:29.938 [2024-10-09 07:55:31.702575] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:15:30.874 [2024-10-09 07:55:32.627365] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:15:30.874 [2024-10-09 07:55:32.628529] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:15:30.874 [2024-10-09 07:55:32.635635] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:15:30.874 [2024-10-09 07:55:32.635795] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:15:30.874 [2024-10-09 07:55:32.635812] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:15:30.874 [2024-10-09 07:55:32.635822] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:15:30.874 [2024-10-09 07:55:32.643611] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:15:30.874 [2024-10-09 07:55:32.643818] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:15:30.874 [2024-10-09 07:55:32.651421] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:15:30.874 [2024-10-09 07:55:32.651724] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:15:30.874 [2024-10-09 07:55:32.668386] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:15:30.874 07:55:32 ublk.test_save_ublk_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:30.874 07:55:32 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # return 0 00:15:30.874 07:55:32 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:15:30.874 07:55:32 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:15:30.874 07:55:32 ublk.test_save_ublk_config -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.874 07:55:32 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:15:30.874 07:55:32 ublk.test_save_ublk_config -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.874 07:55:32 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:15:30.874 07:55:32 ublk.test_save_ublk_config -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:15:30.874 07:55:32 ublk.test_save_ublk_config -- ublk/ublk.sh@125 -- # killprocess 73068 00:15:30.874 07:55:32 ublk.test_save_ublk_config -- common/autotest_common.sh@950 -- # '[' -z 73068 ']' 00:15:30.874 07:55:32 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # kill -0 73068 00:15:30.874 07:55:32 ublk.test_save_ublk_config -- common/autotest_common.sh@955 -- # uname 00:15:30.874 07:55:32 ublk.test_save_ublk_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:30.874 07:55:32 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73068 00:15:30.874 07:55:32 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:30.874 07:55:32 ublk.test_save_ublk_config -- 
common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:30.874 07:55:32 ublk.test_save_ublk_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73068' 00:15:30.874 killing process with pid 73068 00:15:30.874 07:55:32 ublk.test_save_ublk_config -- common/autotest_common.sh@969 -- # kill 73068 00:15:30.874 07:55:32 ublk.test_save_ublk_config -- common/autotest_common.sh@974 -- # wait 73068 00:15:32.250 [2024-10-09 07:55:34.211189] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:15:32.250 [2024-10-09 07:55:34.240553] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:15:32.250 [2024-10-09 07:55:34.240763] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:15:32.250 [2024-10-09 07:55:34.248439] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:15:32.250 [2024-10-09 07:55:34.248549] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:15:32.250 [2024-10-09 07:55:34.248572] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:15:32.250 [2024-10-09 07:55:34.248642] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:15:32.250 [2024-10-09 07:55:34.248891] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:15:34.150 07:55:36 ublk.test_save_ublk_config -- ublk/ublk.sh@126 -- # trap - EXIT 00:15:34.150 ************************************ 00:15:34.150 END TEST test_save_ublk_config 00:15:34.150 ************************************ 00:15:34.150 00:15:34.150 real 0m9.944s 00:15:34.150 user 0m8.050s 00:15:34.150 sys 0m2.996s 00:15:34.150 07:55:36 ublk.test_save_ublk_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:34.150 07:55:36 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:15:34.150 07:55:36 ublk -- ublk/ublk.sh@139 -- # spdk_pid=73158 00:15:34.150 07:55:36 ublk -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:15:34.150 07:55:36 ublk -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:34.150 07:55:36 ublk -- ublk/ublk.sh@141 -- # waitforlisten 73158 00:15:34.150 07:55:36 ublk -- common/autotest_common.sh@831 -- # '[' -z 73158 ']' 00:15:34.150 07:55:36 ublk -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:34.150 07:55:36 ublk -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:34.150 07:55:36 ublk -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:34.150 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:34.150 07:55:36 ublk -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:34.150 07:55:36 ublk -- common/autotest_common.sh@10 -- # set +x 00:15:34.408 [2024-10-09 07:55:36.250222] Starting SPDK v25.01-pre git sha1 1c2942c86 / DPDK 24.03.0 initialization... 
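For the device-creation tests that follow, the target is restarted with a two-core reactor mask (ublk.sh@138 above). The -m flag takes a hexadecimal CPU bitmask, so 0x3 (binary 11) pins reactors to cores 0 and 1, which is why the EAL banner below reports two cores and two "Reactor started" notices appear.

    # 0x3 = 0b11 -> cores 0 and 1; a four-core run would use -m 0xF (illustration only).
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk &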
00:15:34.408 [2024-10-09 07:55:36.250631] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73158 ] 00:15:34.408 [2024-10-09 07:55:36.415454] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:34.664 [2024-10-09 07:55:36.606764] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:15:34.664 [2024-10-09 07:55:36.606768] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:15:35.598 07:55:37 ublk -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:35.598 07:55:37 ublk -- common/autotest_common.sh@864 -- # return 0 00:15:35.598 07:55:37 ublk -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:15:35.598 07:55:37 ublk -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:15:35.598 07:55:37 ublk -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:35.598 07:55:37 ublk -- common/autotest_common.sh@10 -- # set +x 00:15:35.598 ************************************ 00:15:35.598 START TEST test_create_ublk 00:15:35.598 ************************************ 00:15:35.598 07:55:37 ublk.test_create_ublk -- common/autotest_common.sh@1125 -- # test_create_ublk 00:15:35.598 07:55:37 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:15:35.598 07:55:37 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.598 07:55:37 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:35.598 [2024-10-09 07:55:37.394361] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:15:35.598 [2024-10-09 07:55:37.396439] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:15:35.598 07:55:37 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.598 07:55:37 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # ublk_target= 00:15:35.598 07:55:37 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:15:35.598 07:55:37 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.598 07:55:37 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:35.858 07:55:37 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.858 07:55:37 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:15:35.858 07:55:37 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:15:35.858 07:55:37 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.858 07:55:37 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:35.858 [2024-10-09 07:55:37.649535] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:15:35.858 [2024-10-09 07:55:37.650017] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:15:35.858 [2024-10-09 07:55:37.650036] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:15:35.858 [2024-10-09 07:55:37.650047] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:15:35.858 [2024-10-09 07:55:37.657567] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:15:35.858 [2024-10-09 07:55:37.657594] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:15:35.858 
[2024-10-09 07:55:37.664381] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:15:35.858 [2024-10-09 07:55:37.665086] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:15:35.858 [2024-10-09 07:55:37.675477] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:15:35.858 07:55:37 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.858 07:55:37 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # ublk_id=0 00:15:35.858 07:55:37 ublk.test_create_ublk -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:15:35.858 07:55:37 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:15:35.858 07:55:37 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.858 07:55:37 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:35.858 07:55:37 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.858 07:55:37 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:15:35.858 { 00:15:35.858 "ublk_device": "/dev/ublkb0", 00:15:35.858 "id": 0, 00:15:35.858 "queue_depth": 512, 00:15:35.858 "num_queues": 4, 00:15:35.858 "bdev_name": "Malloc0" 00:15:35.858 } 00:15:35.858 ]' 00:15:35.858 07:55:37 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:15:35.858 07:55:37 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:15:35.858 07:55:37 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:15:35.858 07:55:37 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:15:35.858 07:55:37 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:15:35.858 07:55:37 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:15:35.858 07:55:37 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:15:36.116 07:55:37 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:15:36.116 07:55:37 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:15:36.116 07:55:37 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:15:36.116 07:55:37 ublk.test_create_ublk -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:15:36.116 07:55:37 ublk.test_create_ublk -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:15:36.116 07:55:37 ublk.test_create_ublk -- lvol/common.sh@41 -- # local offset=0 00:15:36.116 07:55:37 ublk.test_create_ublk -- lvol/common.sh@42 -- # local size=134217728 00:15:36.116 07:55:37 ublk.test_create_ublk -- lvol/common.sh@43 -- # local rw=write 00:15:36.116 07:55:37 ublk.test_create_ublk -- lvol/common.sh@44 -- # local pattern=0xcc 00:15:36.116 07:55:37 ublk.test_create_ublk -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:15:36.116 07:55:37 ublk.test_create_ublk -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:15:36.116 07:55:37 ublk.test_create_ublk -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:15:36.116 07:55:37 ublk.test_create_ublk -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:15:36.116 07:55:37 ublk.test_create_ublk -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 
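The /dev/ublkb0 device that the fio pass below writes to was created with just three RPC calls, traced at ublk.sh@33-@37 above. Condensed with scripts/rpc.py (which the suite's rpc_cmd wraps), arguments verbatim from the trace:

    scripts/rpc.py ublk_create_target                     # start the ublk target
    scripts/rpc.py bdev_malloc_create 128 4096            # 128 MiB ram bdev -> "Malloc0"
    scripts/rpc.py ublk_start_disk Malloc0 0 -q 4 -d 512  # 4 queues, depth 512 -> /dev/ublkb0

fio then fills the device with the 0xcc pattern for a time-based 10 seconds; as the warning below notes, the runtime is consumed entirely by the write phase, so the separate verification read phase never starts.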
00:15:36.116 07:55:37 ublk.test_create_ublk -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:15:36.116 fio: verification read phase will never start because write phase uses all of runtime 00:15:36.116 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:15:36.116 fio-3.35 00:15:36.116 Starting 1 process 00:15:48.374 00:15:48.374 fio_test: (groupid=0, jobs=1): err= 0: pid=73207: Wed Oct 9 07:55:48 2024 00:15:48.374 write: IOPS=10.5k, BW=41.1MiB/s (43.1MB/s)(411MiB/10001msec); 0 zone resets 00:15:48.374 clat (usec): min=67, max=7945, avg=93.37, stdev=160.26 00:15:48.374 lat (usec): min=68, max=7969, avg=94.23, stdev=160.31 00:15:48.374 clat percentiles (usec): 00:15:48.374 | 1.00th=[ 76], 5.00th=[ 77], 10.00th=[ 78], 20.00th=[ 79], 00:15:48.374 | 30.00th=[ 80], 40.00th=[ 81], 50.00th=[ 81], 60.00th=[ 83], 00:15:48.374 | 70.00th=[ 85], 80.00th=[ 89], 90.00th=[ 95], 95.00th=[ 103], 00:15:48.374 | 99.00th=[ 135], 99.50th=[ 159], 99.90th=[ 3326], 99.95th=[ 3589], 00:15:48.374 | 99.99th=[ 3785] 00:15:48.374 bw ( KiB/s): min=18730, max=44856, per=99.97%, avg=42089.37, stdev=5876.71, samples=19 00:15:48.374 iops : min= 4682, max=11214, avg=10522.32, stdev=1469.29, samples=19 00:15:48.374 lat (usec) : 100=94.03%, 250=5.56%, 500=0.01%, 750=0.02%, 1000=0.03% 00:15:48.374 lat (msec) : 2=0.10%, 4=0.24%, 10=0.01% 00:15:48.374 cpu : usr=3.05%, sys=7.95%, ctx=105265, majf=0, minf=797 00:15:48.374 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:48.374 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:48.374 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:48.374 issued rwts: total=0,105262,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:48.374 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:48.374 00:15:48.374 Run status group 0 (all jobs): 00:15:48.374 WRITE: bw=41.1MiB/s (43.1MB/s), 41.1MiB/s-41.1MiB/s (43.1MB/s-43.1MB/s), io=411MiB (431MB), run=10001-10001msec 00:15:48.374 00:15:48.374 Disk stats (read/write): 00:15:48.374 ublkb0: ios=0/104226, merge=0/0, ticks=0/8851, in_queue=8852, util=99.05% 00:15:48.374 07:55:48 ublk.test_create_ublk -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0 00:15:48.374 07:55:48 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.374 07:55:48 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:48.374 [2024-10-09 07:55:48.193676] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:15:48.374 [2024-10-09 07:55:48.237849] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:15:48.374 [2024-10-09 07:55:48.238859] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:15:48.374 [2024-10-09 07:55:48.247370] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:15:48.374 [2024-10-09 07:55:48.247714] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:15:48.374 [2024-10-09 07:55:48.247745] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:15:48.374 07:55:48 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.374 07:55:48 ublk.test_create_ublk -- ublk/ublk.sh@53 -- # NOT rpc_cmd ublk_stop_disk 0 00:15:48.374 07:55:48 
ublk.test_create_ublk -- common/autotest_common.sh@650 -- # local es=0 00:15:48.374 07:55:48 ublk.test_create_ublk -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:15:48.374 07:55:48 ublk.test_create_ublk -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:15:48.374 07:55:48 ublk.test_create_ublk -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:48.374 07:55:48 ublk.test_create_ublk -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:15:48.374 07:55:48 ublk.test_create_ublk -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:48.374 07:55:48 ublk.test_create_ublk -- common/autotest_common.sh@653 -- # rpc_cmd ublk_stop_disk 0 00:15:48.374 07:55:48 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.374 07:55:48 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:48.374 [2024-10-09 07:55:48.263474] ublk.c:1087:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:15:48.374 request: 00:15:48.374 { 00:15:48.374 "ublk_id": 0, 00:15:48.374 "method": "ublk_stop_disk", 00:15:48.374 "req_id": 1 00:15:48.374 } 00:15:48.374 Got JSON-RPC error response 00:15:48.374 response: 00:15:48.374 { 00:15:48.374 "code": -19, 00:15:48.374 "message": "No such device" 00:15:48.374 } 00:15:48.374 07:55:48 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:15:48.374 07:55:48 ublk.test_create_ublk -- common/autotest_common.sh@653 -- # es=1 00:15:48.374 07:55:48 ublk.test_create_ublk -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:48.374 07:55:48 ublk.test_create_ublk -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:48.374 07:55:48 ublk.test_create_ublk -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:48.374 07:55:48 ublk.test_create_ublk -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:15:48.374 07:55:48 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.374 07:55:48 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:48.374 [2024-10-09 07:55:48.278476] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:15:48.374 [2024-10-09 07:55:48.281364] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:15:48.374 [2024-10-09 07:55:48.281416] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:15:48.374 07:55:48 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.374 07:55:48 ublk.test_create_ublk -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:15:48.374 07:55:48 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.374 07:55:48 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:48.374 07:55:48 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.374 07:55:48 ublk.test_create_ublk -- ublk/ublk.sh@57 -- # check_leftover_devices 00:15:48.374 07:55:48 ublk.test_create_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:15:48.374 07:55:48 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.374 07:55:48 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:48.374 07:55:48 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.374 07:55:48 ublk.test_create_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:15:48.374 07:55:48 ublk.test_create_ublk -- lvol/common.sh@26 -- # jq length 00:15:48.374 07:55:48 ublk.test_create_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 
']' 00:15:48.374 07:55:48 ublk.test_create_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:15:48.374 07:55:48 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.374 07:55:48 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:48.374 07:55:48 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.374 07:55:48 ublk.test_create_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:15:48.374 07:55:48 ublk.test_create_ublk -- lvol/common.sh@28 -- # jq length 00:15:48.374 ************************************ 00:15:48.374 END TEST test_create_ublk 00:15:48.374 ************************************ 00:15:48.374 07:55:49 ublk.test_create_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:15:48.374 00:15:48.374 real 0m11.639s 00:15:48.374 user 0m0.738s 00:15:48.374 sys 0m0.894s 00:15:48.374 07:55:49 ublk.test_create_ublk -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:48.374 07:55:49 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:48.374 07:55:49 ublk -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:15:48.374 07:55:49 ublk -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:15:48.374 07:55:49 ublk -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:48.374 07:55:49 ublk -- common/autotest_common.sh@10 -- # set +x 00:15:48.374 ************************************ 00:15:48.374 START TEST test_create_multi_ublk 00:15:48.374 ************************************ 00:15:48.374 07:55:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@1125 -- # test_create_multi_ublk 00:15:48.374 07:55:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:15:48.374 07:55:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.374 07:55:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:48.374 [2024-10-09 07:55:49.088357] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:15:48.374 [2024-10-09 07:55:49.090313] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:15:48.374 07:55:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.374 07:55:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # ublk_target= 00:15:48.374 07:55:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # seq 0 3 00:15:48.374 07:55:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:48.374 07:55:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:15:48.374 07:55:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.375 07:55:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:48.375 07:55:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.375 07:55:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:15:48.375 07:55:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:15:48.375 07:55:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.375 07:55:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:48.375 [2024-10-09 07:55:49.420521] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:15:48.375 [2024-10-09 
07:55:49.421005] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:15:48.375 [2024-10-09 07:55:49.421021] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:15:48.375 [2024-10-09 07:55:49.421035] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:15:48.375 [2024-10-09 07:55:49.428383] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:15:48.375 [2024-10-09 07:55:49.428415] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:15:48.375 [2024-10-09 07:55:49.436367] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:15:48.375 [2024-10-09 07:55:49.437104] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:15:48.375 [2024-10-09 07:55:49.467369] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:15:48.375 07:55:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.375 07:55:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=0 00:15:48.375 07:55:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:48.375 07:55:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:15:48.375 07:55:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.375 07:55:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:48.375 07:55:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.375 07:55:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:15:48.375 07:55:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:15:48.375 07:55:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.375 07:55:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:48.375 [2024-10-09 07:55:49.723543] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:15:48.375 [2024-10-09 07:55:49.724024] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:15:48.375 [2024-10-09 07:55:49.724058] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:15:48.375 [2024-10-09 07:55:49.724080] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:15:48.375 [2024-10-09 07:55:49.731397] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:15:48.375 [2024-10-09 07:55:49.731430] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:15:48.375 [2024-10-09 07:55:49.739370] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:15:48.375 [2024-10-09 07:55:49.740122] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:15:48.375 [2024-10-09 07:55:49.743296] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:15:48.375 07:55:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.375 07:55:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=1 00:15:48.375 07:55:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:48.375 07:55:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 
-- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:15:48.375 07:55:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.375 07:55:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:48.375 07:55:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.375 07:55:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:15:48.375 07:55:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:15:48.375 07:55:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.375 07:55:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:48.375 [2024-10-09 07:55:49.999551] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:15:48.375 [2024-10-09 07:55:50.000051] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:15:48.375 [2024-10-09 07:55:50.000074] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:15:48.375 [2024-10-09 07:55:50.000087] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:15:48.375 [2024-10-09 07:55:50.008588] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:15:48.375 [2024-10-09 07:55:50.008629] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:15:48.375 [2024-10-09 07:55:50.015400] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:15:48.375 [2024-10-09 07:55:50.016196] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:15:48.375 [2024-10-09 07:55:50.019415] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:15:48.375 07:55:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.375 07:55:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=2 00:15:48.375 07:55:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:48.375 07:55:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:15:48.375 07:55:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.375 07:55:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:48.375 07:55:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.375 07:55:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:15:48.375 07:55:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:15:48.375 07:55:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.375 07:55:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:48.375 [2024-10-09 07:55:50.268523] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:15:48.375 [2024-10-09 07:55:50.268997] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:15:48.375 [2024-10-09 07:55:50.269023] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:15:48.375 [2024-10-09 07:55:50.269034] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:15:48.375 [2024-10-09 07:55:50.276391] ublk.c: 
349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:15:48.375 [2024-10-09 07:55:50.276420] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:15:48.375 [2024-10-09 07:55:50.284377] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:15:48.375 [2024-10-09 07:55:50.285085] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:15:48.375 [2024-10-09 07:55:50.290263] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:15:48.375 07:55:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.375 07:55:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=3 00:15:48.375 07:55:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:15:48.375 07:55:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.375 07:55:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:48.375 07:55:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.375 07:55:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:15:48.375 { 00:15:48.375 "ublk_device": "/dev/ublkb0", 00:15:48.375 "id": 0, 00:15:48.375 "queue_depth": 512, 00:15:48.375 "num_queues": 4, 00:15:48.375 "bdev_name": "Malloc0" 00:15:48.375 }, 00:15:48.375 { 00:15:48.375 "ublk_device": "/dev/ublkb1", 00:15:48.375 "id": 1, 00:15:48.375 "queue_depth": 512, 00:15:48.375 "num_queues": 4, 00:15:48.375 "bdev_name": "Malloc1" 00:15:48.375 }, 00:15:48.375 { 00:15:48.375 "ublk_device": "/dev/ublkb2", 00:15:48.375 "id": 2, 00:15:48.375 "queue_depth": 512, 00:15:48.375 "num_queues": 4, 00:15:48.375 "bdev_name": "Malloc2" 00:15:48.375 }, 00:15:48.375 { 00:15:48.375 "ublk_device": "/dev/ublkb3", 00:15:48.375 "id": 3, 00:15:48.375 "queue_depth": 512, 00:15:48.375 "num_queues": 4, 00:15:48.375 "bdev_name": "Malloc3" 00:15:48.375 } 00:15:48.375 ]' 00:15:48.375 07:55:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # seq 0 3 00:15:48.375 07:55:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:48.375 07:55:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:15:48.375 07:55:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:15:48.375 07:55:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:15:48.634 07:55:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:15:48.634 07:55:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:15:48.634 07:55:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:15:48.634 07:55:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:15:48.634 07:55:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:15:48.634 07:55:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:15:48.634 07:55:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:15:48.634 07:55:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:48.634 07:55:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:15:48.634 07:55:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = \/\d\e\v\/\u\b\l\k\b\1 ]] 00:15:48.634 07:55:50 
ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:15:48.893 07:55:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:15:48.893 07:55:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:15:48.893 07:55:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:15:48.894 07:55:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:15:48.894 07:55:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:15:48.894 07:55:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:15:48.894 07:55:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:15:48.894 07:55:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:48.894 07:55:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:15:48.894 07:55:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:15:48.894 07:55:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:15:49.153 07:55:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:15:49.153 07:55:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:15:49.153 07:55:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:15:49.153 07:55:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:15:49.153 07:55:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:15:49.153 07:55:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:15:49.153 07:55:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:15:49.153 07:55:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:49.153 07:55:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:15:49.153 07:55:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:15:49.153 07:55:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:15:49.153 07:55:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:15:49.153 07:55:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:15:49.411 07:55:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:15:49.411 07:55:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:15:49.411 07:55:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:15:49.411 07:55:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:15:49.411 07:55:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:15:49.411 07:55:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:15:49.411 07:55:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # seq 0 3 00:15:49.411 07:55:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:49.411 07:55:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:15:49.411 07:55:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.411 07:55:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:49.411 [2024-10-09 07:55:51.297524] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl 
cmd UBLK_CMD_STOP_DEV 00:15:49.411 [2024-10-09 07:55:51.336428] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:15:49.411 [2024-10-09 07:55:51.337499] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:15:49.411 [2024-10-09 07:55:51.344419] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:15:49.411 [2024-10-09 07:55:51.344825] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:15:49.411 [2024-10-09 07:55:51.344876] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:15:49.411 07:55:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.411 07:55:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:49.411 07:55:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:15:49.411 07:55:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.411 07:55:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:49.411 [2024-10-09 07:55:51.360512] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:15:49.411 [2024-10-09 07:55:51.400474] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:15:49.411 [2024-10-09 07:55:51.401790] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:15:49.411 [2024-10-09 07:55:51.409442] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:15:49.411 [2024-10-09 07:55:51.409829] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:15:49.411 [2024-10-09 07:55:51.409861] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:15:49.411 07:55:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.411 07:55:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:49.411 07:55:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:15:49.411 07:55:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.411 07:55:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:49.709 [2024-10-09 07:55:51.424516] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:15:49.709 [2024-10-09 07:55:51.460440] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:15:49.709 [2024-10-09 07:55:51.461436] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:15:49.709 [2024-10-09 07:55:51.468398] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:15:49.709 [2024-10-09 07:55:51.468751] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:15:49.709 [2024-10-09 07:55:51.468777] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:15:49.709 07:55:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.709 07:55:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:49.709 07:55:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:15:49.709 07:55:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.709 07:55:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:49.709 [2024-10-09 
07:55:51.484485] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:15:49.709 [2024-10-09 07:55:51.513831] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:15:49.709 [2024-10-09 07:55:51.514870] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:15:49.709 [2024-10-09 07:55:51.524388] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:15:49.709 [2024-10-09 07:55:51.524711] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:15:49.709 [2024-10-09 07:55:51.524737] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:15:49.709 07:55:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.709 07:55:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:15:49.968 [2024-10-09 07:55:51.780491] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:15:49.968 [2024-10-09 07:55:51.783403] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:15:49.968 [2024-10-09 07:55:51.783452] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:15:49.968 07:55:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # seq 0 3 00:15:49.968 07:55:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:49.968 07:55:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:15:49.968 07:55:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.968 07:55:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:50.534 07:55:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.534 07:55:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:50.534 07:55:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:15:50.534 07:55:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.534 07:55:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:51.102 07:55:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.102 07:55:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:51.102 07:55:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:15:51.102 07:55:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.102 07:55:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:51.361 07:55:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.361 07:55:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:51.361 07:55:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:15:51.361 07:55:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.361 07:55:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:51.620 07:55:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.620 07:55:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@96 -- # check_leftover_devices 00:15:51.620 07:55:53 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 
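[editor's note] The teardown just traced (stop all four disks, destroy the target, then delete the Malloc bdevs) mirrors the setup at the top of test_create_multi_ublk. A minimal sketch of the RPC sequence the test drives, assuming a running spdk_tgt on the default /var/tmp/spdk.sock; the rpc_cmd helper seen in the trace ultimately goes through this same rpc.py:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc ublk_create_target                          # load the ublk target
    for i in 0 1 2 3; do
      $rpc bdev_malloc_create -b Malloc$i 128 4096   # 128 MiB bdev, 4096-byte blocks
      $rpc ublk_start_disk Malloc$i $i -q 4 -d 512   # exposes /dev/ublkb$i: 4 queues, qd 512
    done
    $rpc ublk_get_disks | jq -r '.[0].ublk_device'   # expect /dev/ublkb0
    for i in 0 1 2 3; do $rpc ublk_stop_disk $i; done
    $rpc ublk_destroy_target                         # same order as the trace: stop, destroy,
    for i in 0 1 2 3; do $rpc bdev_malloc_delete Malloc$i; done   # then delete the bdevs

Note the earlier negative test: ublk_stop_disk against an id that no longer exists returns -19 (No such device), which the harness deliberately provoked and asserted on above.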
00:15:51.620 07:55:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.620 07:55:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:51.620 07:55:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.620 07:55:53 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:15:51.620 07:55:53 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # jq length 00:15:51.620 07:55:53 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:15:51.620 07:55:53 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:15:51.620 07:55:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.620 07:55:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:51.620 07:55:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.620 07:55:53 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:15:51.620 07:55:53 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # jq length 00:15:51.620 07:55:53 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:15:51.620 00:15:51.620 real 0m4.469s 00:15:51.620 user 0m1.237s 00:15:51.620 sys 0m0.157s 00:15:51.620 07:55:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:51.620 07:55:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:51.620 ************************************ 00:15:51.620 END TEST test_create_multi_ublk 00:15:51.620 ************************************ 00:15:51.620 07:55:53 ublk -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:15:51.620 07:55:53 ublk -- ublk/ublk.sh@147 -- # cleanup 00:15:51.620 07:55:53 ublk -- ublk/ublk.sh@130 -- # killprocess 73158 00:15:51.620 07:55:53 ublk -- common/autotest_common.sh@950 -- # '[' -z 73158 ']' 00:15:51.620 07:55:53 ublk -- common/autotest_common.sh@954 -- # kill -0 73158 00:15:51.620 07:55:53 ublk -- common/autotest_common.sh@955 -- # uname 00:15:51.620 07:55:53 ublk -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:51.620 07:55:53 ublk -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73158 00:15:51.620 07:55:53 ublk -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:51.620 07:55:53 ublk -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:51.620 killing process with pid 73158 00:15:51.620 07:55:53 ublk -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73158' 00:15:51.620 07:55:53 ublk -- common/autotest_common.sh@969 -- # kill 73158 00:15:51.620 07:55:53 ublk -- common/autotest_common.sh@974 -- # wait 73158 00:15:52.995 [2024-10-09 07:55:54.585562] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:15:52.995 [2024-10-09 07:55:54.585640] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:15:53.928 00:15:53.928 real 0m29.861s 00:15:53.928 user 0m43.729s 00:15:53.928 sys 0m9.433s 00:15:53.928 07:55:55 ublk -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:53.928 ************************************ 00:15:53.928 07:55:55 ublk -- common/autotest_common.sh@10 -- # set +x 00:15:53.928 END TEST ublk 00:15:53.928 ************************************ 00:15:53.928 07:55:55 -- spdk/autotest.sh@248 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:15:53.929 07:55:55 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:15:53.929 
07:55:55 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:53.929 07:55:55 -- common/autotest_common.sh@10 -- # set +x 00:15:53.929 ************************************ 00:15:53.929 START TEST ublk_recovery 00:15:53.929 ************************************ 00:15:53.929 07:55:55 ublk_recovery -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:15:53.929 * Looking for test storage... 00:15:54.186 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:15:54.186 07:55:55 ublk_recovery -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:15:54.186 07:55:55 ublk_recovery -- common/autotest_common.sh@1681 -- # lcov --version 00:15:54.186 07:55:55 ublk_recovery -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:15:54.186 07:55:56 ublk_recovery -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:15:54.186 07:55:56 ublk_recovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:54.186 07:55:56 ublk_recovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:54.186 07:55:56 ublk_recovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:54.186 07:55:56 ublk_recovery -- scripts/common.sh@336 -- # IFS=.-: 00:15:54.186 07:55:56 ublk_recovery -- scripts/common.sh@336 -- # read -ra ver1 00:15:54.186 07:55:56 ublk_recovery -- scripts/common.sh@337 -- # IFS=.-: 00:15:54.186 07:55:56 ublk_recovery -- scripts/common.sh@337 -- # read -ra ver2 00:15:54.186 07:55:56 ublk_recovery -- scripts/common.sh@338 -- # local 'op=<' 00:15:54.186 07:55:56 ublk_recovery -- scripts/common.sh@340 -- # ver1_l=2 00:15:54.186 07:55:56 ublk_recovery -- scripts/common.sh@341 -- # ver2_l=1 00:15:54.186 07:55:56 ublk_recovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:54.186 07:55:56 ublk_recovery -- scripts/common.sh@344 -- # case "$op" in 00:15:54.186 07:55:56 ublk_recovery -- scripts/common.sh@345 -- # : 1 00:15:54.186 07:55:56 ublk_recovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:54.186 07:55:56 ublk_recovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:54.186 07:55:56 ublk_recovery -- scripts/common.sh@365 -- # decimal 1 00:15:54.186 07:55:56 ublk_recovery -- scripts/common.sh@353 -- # local d=1 00:15:54.186 07:55:56 ublk_recovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:54.186 07:55:56 ublk_recovery -- scripts/common.sh@355 -- # echo 1 00:15:54.186 07:55:56 ublk_recovery -- scripts/common.sh@365 -- # ver1[v]=1 00:15:54.186 07:55:56 ublk_recovery -- scripts/common.sh@366 -- # decimal 2 00:15:54.186 07:55:56 ublk_recovery -- scripts/common.sh@353 -- # local d=2 00:15:54.186 07:55:56 ublk_recovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:54.186 07:55:56 ublk_recovery -- scripts/common.sh@355 -- # echo 2 00:15:54.186 07:55:56 ublk_recovery -- scripts/common.sh@366 -- # ver2[v]=2 00:15:54.186 07:55:56 ublk_recovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:54.186 07:55:56 ublk_recovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:54.186 07:55:56 ublk_recovery -- scripts/common.sh@368 -- # return 0 00:15:54.186 07:55:56 ublk_recovery -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:54.186 07:55:56 ublk_recovery -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:15:54.186 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:54.186 --rc genhtml_branch_coverage=1 00:15:54.186 --rc genhtml_function_coverage=1 00:15:54.186 --rc genhtml_legend=1 00:15:54.186 --rc geninfo_all_blocks=1 00:15:54.186 --rc geninfo_unexecuted_blocks=1 00:15:54.186 00:15:54.186 ' 00:15:54.186 07:55:56 ublk_recovery -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:15:54.186 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:54.186 --rc genhtml_branch_coverage=1 00:15:54.186 --rc genhtml_function_coverage=1 00:15:54.186 --rc genhtml_legend=1 00:15:54.186 --rc geninfo_all_blocks=1 00:15:54.186 --rc geninfo_unexecuted_blocks=1 00:15:54.186 00:15:54.186 ' 00:15:54.186 07:55:56 ublk_recovery -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:15:54.186 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:54.186 --rc genhtml_branch_coverage=1 00:15:54.186 --rc genhtml_function_coverage=1 00:15:54.186 --rc genhtml_legend=1 00:15:54.186 --rc geninfo_all_blocks=1 00:15:54.186 --rc geninfo_unexecuted_blocks=1 00:15:54.186 00:15:54.186 ' 00:15:54.186 07:55:56 ublk_recovery -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:15:54.186 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:54.186 --rc genhtml_branch_coverage=1 00:15:54.186 --rc genhtml_function_coverage=1 00:15:54.186 --rc genhtml_legend=1 00:15:54.186 --rc geninfo_all_blocks=1 00:15:54.186 --rc geninfo_unexecuted_blocks=1 00:15:54.186 00:15:54.186 ' 00:15:54.186 07:55:56 ublk_recovery -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:15:54.186 07:55:56 ublk_recovery -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:15:54.186 07:55:56 ublk_recovery -- lvol/common.sh@7 -- # MALLOC_BS=512 00:15:54.186 07:55:56 ublk_recovery -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:15:54.186 07:55:56 ublk_recovery -- lvol/common.sh@9 -- # AIO_BS=4096 00:15:54.186 07:55:56 ublk_recovery -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:15:54.186 07:55:56 ublk_recovery -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:15:54.186 07:55:56 ublk_recovery -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:15:54.186 07:55:56 ublk_recovery -- lvol/common.sh@14 
-- # LVS_DEFAULT_CAPACITY=130023424 00:15:54.186 07:55:56 ublk_recovery -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:15:54.186 07:55:56 ublk_recovery -- ublk/ublk_recovery.sh@19 -- # spdk_pid=73583 00:15:54.186 07:55:56 ublk_recovery -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:54.186 07:55:56 ublk_recovery -- ublk/ublk_recovery.sh@21 -- # waitforlisten 73583 00:15:54.186 07:55:56 ublk_recovery -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:15:54.186 07:55:56 ublk_recovery -- common/autotest_common.sh@831 -- # '[' -z 73583 ']' 00:15:54.186 07:55:56 ublk_recovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:54.186 07:55:56 ublk_recovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:54.186 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:54.186 07:55:56 ublk_recovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:54.186 07:55:56 ublk_recovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:54.186 07:55:56 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:54.186 [2024-10-09 07:55:56.140140] Starting SPDK v25.01-pre git sha1 1c2942c86 / DPDK 24.03.0 initialization... 00:15:54.186 [2024-10-09 07:55:56.140299] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73583 ] 00:15:54.443 [2024-10-09 07:55:56.302509] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:54.701 [2024-10-09 07:55:56.537726] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:15:54.701 [2024-10-09 07:55:56.537726] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:15:55.635 07:55:57 ublk_recovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:55.635 07:55:57 ublk_recovery -- common/autotest_common.sh@864 -- # return 0 00:15:55.635 07:55:57 ublk_recovery -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:15:55.635 07:55:57 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.635 07:55:57 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:55.635 [2024-10-09 07:55:57.412363] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:15:55.635 [2024-10-09 07:55:57.414545] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:15:55.635 07:55:57 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.635 07:55:57 ublk_recovery -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:15:55.635 07:55:57 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.635 07:55:57 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:55.635 malloc0 00:15:55.635 07:55:57 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.635 07:55:57 ublk_recovery -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:15:55.635 07:55:57 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.635 07:55:57 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:55.635 [2024-10-09 07:55:57.548573] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 
2 queue_depth 128 00:15:55.635 [2024-10-09 07:55:57.548713] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:15:55.635 [2024-10-09 07:55:57.548735] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:15:55.635 [2024-10-09 07:55:57.548746] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:15:55.635 [2024-10-09 07:55:57.556423] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:15:55.635 [2024-10-09 07:55:57.556478] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:15:55.635 [2024-10-09 07:55:57.564438] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:15:55.635 [2024-10-09 07:55:57.564678] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:15:55.635 [2024-10-09 07:55:57.587422] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:15:55.635 1 00:15:55.635 07:55:57 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.635 07:55:57 ublk_recovery -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:15:57.009 07:55:58 ublk_recovery -- ublk/ublk_recovery.sh@31 -- # fio_proc=73620 00:15:57.009 07:55:58 ublk_recovery -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:15:57.009 07:55:58 ublk_recovery -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:15:57.009 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:57.009 fio-3.35 00:15:57.009 Starting 1 process 00:16:02.323 07:56:03 ublk_recovery -- ublk/ublk_recovery.sh@36 -- # kill -9 73583 00:16:02.323 07:56:03 ublk_recovery -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:16:07.584 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 73583 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:16:07.584 07:56:08 ublk_recovery -- ublk/ublk_recovery.sh@42 -- # spdk_pid=73728 00:16:07.584 07:56:08 ublk_recovery -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:16:07.584 07:56:08 ublk_recovery -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:07.584 07:56:08 ublk_recovery -- ublk/ublk_recovery.sh@44 -- # waitforlisten 73728 00:16:07.584 07:56:08 ublk_recovery -- common/autotest_common.sh@831 -- # '[' -z 73728 ']' 00:16:07.584 07:56:08 ublk_recovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:07.584 07:56:08 ublk_recovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:07.584 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:07.584 07:56:08 ublk_recovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:07.584 07:56:08 ublk_recovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:07.584 07:56:08 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:07.584 [2024-10-09 07:56:08.730964] Starting SPDK v25.01-pre git sha1 1c2942c86 / DPDK 24.03.0 initialization... 
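[editor's note] Condensed, the crash-and-recover sequence ublk_recovery.sh is driving here looks like the following. Binary and script paths are the ones from this run; the waitforlisten polling and the sleeps between steps are elided, and pids will of course differ:

    bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $bin -m 0x3 -L ublk & tgt=$!                      # first target
    $rpc ublk_create_target
    $rpc bdev_malloc_create -b malloc0 64 4096
    $rpc ublk_start_disk malloc0 1 -q 2 -d 128        # /dev/ublkb1
    taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 \
      --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 \
      --time_based --runtime=60 & fio=$!
    kill -9 $tgt                                      # crash the target mid-I/O
    $bin -m 0x3 -L ublk & tgt=$!                      # second target, fresh pid
    $rpc ublk_create_target
    $rpc bdev_malloc_create -b malloc0 64 4096        # recreate the backing bdev
    $rpc ublk_recover_disk malloc0 1                  # reattach to the still-open /dev/ublkb1
    wait $fio                                         # fio should finish its 60 s with err=0

The point of the test is the last two steps: the kernel keeps /dev/ublkb1 alive across the target crash, and the UBLK_CMD_START_USER_RECOVERY / UBLK_CMD_END_USER_RECOVERY exchange visible in the trace that follows hands the queues back to the new process.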
00:16:07.584 [2024-10-09 07:56:08.731179] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73728 ] 00:16:07.584 [2024-10-09 07:56:08.911291] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:07.584 [2024-10-09 07:56:09.150426] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:16:07.584 [2024-10-09 07:56:09.150437] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:16:08.150 07:56:09 ublk_recovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:08.150 07:56:09 ublk_recovery -- common/autotest_common.sh@864 -- # return 0 00:16:08.150 07:56:09 ublk_recovery -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:16:08.150 07:56:09 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.150 07:56:09 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:08.150 [2024-10-09 07:56:09.942362] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:08.150 [2024-10-09 07:56:09.944454] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:08.150 07:56:09 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.151 07:56:09 ublk_recovery -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:16:08.151 07:56:09 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.151 07:56:09 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:08.151 malloc0 00:16:08.151 07:56:10 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.151 07:56:10 ublk_recovery -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:16:08.151 07:56:10 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.151 07:56:10 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:08.151 [2024-10-09 07:56:10.086547] ublk.c:2106:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:16:08.151 [2024-10-09 07:56:10.086605] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:16:08.151 [2024-10-09 07:56:10.086636] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:16:08.151 [2024-10-09 07:56:10.094419] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:16:08.151 [2024-10-09 07:56:10.094456] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 2 00:16:08.151 [2024-10-09 07:56:10.094470] ublk.c:2035:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:16:08.151 [2024-10-09 07:56:10.094574] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:16:08.151 1 00:16:08.151 07:56:10 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.151 07:56:10 ublk_recovery -- ublk/ublk_recovery.sh@52 -- # wait 73620 00:16:08.151 [2024-10-09 07:56:10.102370] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:16:08.151 [2024-10-09 07:56:10.109958] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:16:08.151 [2024-10-09 07:56:10.117566] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:16:08.151 [2024-10-09 
07:56:10.117602] ublk.c: 413:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully 00:17:04.409 00:17:04.409 fio_test: (groupid=0, jobs=1): err= 0: pid=73623: Wed Oct 9 07:56:58 2024 00:17:04.409 read: IOPS=17.8k, BW=69.7MiB/s (73.1MB/s)(4183MiB/60002msec) 00:17:04.409 slat (nsec): min=1761, max=784465, avg=6485.75, stdev=2987.89 00:17:04.409 clat (usec): min=1104, max=6524.8k, avg=3473.53, stdev=46726.25 00:17:04.409 lat (usec): min=1139, max=6524.8k, avg=3480.01, stdev=46726.24 00:17:04.409 clat percentiles (usec): 00:17:04.409 | 1.00th=[ 2540], 5.00th=[ 2802], 10.00th=[ 2835], 20.00th=[ 2900], 00:17:04.409 | 30.00th=[ 2933], 40.00th=[ 2966], 50.00th=[ 2999], 60.00th=[ 3032], 00:17:04.409 | 70.00th=[ 3064], 80.00th=[ 3130], 90.00th=[ 3523], 95.00th=[ 4178], 00:17:04.409 | 99.00th=[ 5735], 99.50th=[ 6521], 99.90th=[ 7570], 99.95th=[ 8717], 00:17:04.409 | 99.99th=[13173] 00:17:04.409 bw ( KiB/s): min=17392, max=83928, per=100.00%, avg=79383.55, stdev=8931.94, samples=107 00:17:04.409 iops : min= 4348, max=20982, avg=19845.88, stdev=2232.98, samples=107 00:17:04.409 write: IOPS=17.8k, BW=69.7MiB/s (73.1MB/s)(4180MiB/60002msec); 0 zone resets 00:17:04.409 slat (nsec): min=1951, max=274304, avg=6799.51, stdev=2921.46 00:17:04.409 clat (usec): min=1263, max=6524.9k, avg=3685.81, stdev=53850.66 00:17:04.409 lat (usec): min=1270, max=6524.9k, avg=3692.61, stdev=53850.64 00:17:04.409 clat percentiles (usec): 00:17:04.409 | 1.00th=[ 2573], 5.00th=[ 2900], 10.00th=[ 2966], 20.00th=[ 3032], 00:17:04.409 | 30.00th=[ 3064], 40.00th=[ 3097], 50.00th=[ 3130], 60.00th=[ 3163], 00:17:04.409 | 70.00th=[ 3195], 80.00th=[ 3261], 90.00th=[ 3556], 95.00th=[ 4178], 00:17:04.409 | 99.00th=[ 5735], 99.50th=[ 6587], 99.90th=[ 7701], 99.95th=[ 8848], 00:17:04.409 | 99.99th=[13435] 00:17:04.409 bw ( KiB/s): min=18640, max=83840, per=100.00%, avg=79311.14, stdev=8909.73, samples=107 00:17:04.409 iops : min= 4660, max=20960, avg=19827.76, stdev=2227.43, samples=107 00:17:04.409 lat (msec) : 2=0.05%, 4=93.90%, 10=6.02%, 20=0.02%, >=2000=0.01% 00:17:04.409 cpu : usr=10.52%, sys=22.46%, ctx=72735, majf=0, minf=13 00:17:04.409 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:17:04.409 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:04.409 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:04.409 issued rwts: total=1070839,1070138,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:04.409 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:04.409 00:17:04.409 Run status group 0 (all jobs): 00:17:04.409 READ: bw=69.7MiB/s (73.1MB/s), 69.7MiB/s-69.7MiB/s (73.1MB/s-73.1MB/s), io=4183MiB (4386MB), run=60002-60002msec 00:17:04.409 WRITE: bw=69.7MiB/s (73.1MB/s), 69.7MiB/s-69.7MiB/s (73.1MB/s-73.1MB/s), io=4180MiB (4383MB), run=60002-60002msec 00:17:04.409 00:17:04.409 Disk stats (read/write): 00:17:04.409 ublkb1: ios=1068623/1067829, merge=0/0, ticks=3615139/3718200, in_queue=7333340, util=99.94% 00:17:04.409 07:56:58 ublk_recovery -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1 00:17:04.409 07:56:58 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.409 07:56:58 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:04.409 [2024-10-09 07:56:58.871621] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:17:04.409 [2024-10-09 07:56:58.911407] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:17:04.409 [2024-10-09 
07:56:58.911644] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:17:04.409 [2024-10-09 07:56:58.920409] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:17:04.409 [2024-10-09 07:56:58.920555] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:17:04.409 [2024-10-09 07:56:58.920584] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:17:04.409 07:56:58 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.409 07:56:58 ublk_recovery -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:17:04.409 07:56:58 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.409 07:56:58 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:04.409 [2024-10-09 07:56:58.935474] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:17:04.409 [2024-10-09 07:56:58.938344] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:17:04.409 [2024-10-09 07:56:58.938390] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:17:04.409 07:56:58 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.409 07:56:58 ublk_recovery -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:17:04.409 07:56:58 ublk_recovery -- ublk/ublk_recovery.sh@59 -- # cleanup 00:17:04.409 07:56:58 ublk_recovery -- ublk/ublk_recovery.sh@14 -- # killprocess 73728 00:17:04.409 07:56:58 ublk_recovery -- common/autotest_common.sh@950 -- # '[' -z 73728 ']' 00:17:04.409 07:56:58 ublk_recovery -- common/autotest_common.sh@954 -- # kill -0 73728 00:17:04.409 07:56:58 ublk_recovery -- common/autotest_common.sh@955 -- # uname 00:17:04.410 07:56:58 ublk_recovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:04.410 07:56:58 ublk_recovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73728 00:17:04.410 07:56:58 ublk_recovery -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:04.410 07:56:58 ublk_recovery -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:04.410 killing process with pid 73728 00:17:04.410 07:56:58 ublk_recovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73728' 00:17:04.410 07:56:58 ublk_recovery -- common/autotest_common.sh@969 -- # kill 73728 00:17:04.410 07:56:58 ublk_recovery -- common/autotest_common.sh@974 -- # wait 73728 00:17:04.410 [2024-10-09 07:57:00.446399] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:17:04.410 [2024-10-09 07:57:00.446470] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:17:04.410 00:17:04.410 real 1m6.054s 00:17:04.410 user 1m48.831s 00:17:04.410 sys 0m32.019s 00:17:04.410 07:57:01 ublk_recovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:04.410 ************************************ 00:17:04.410 END TEST ublk_recovery 00:17:04.410 ************************************ 00:17:04.410 07:57:01 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:04.410 07:57:01 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:17:04.410 07:57:01 -- spdk/autotest.sh@256 -- # timing_exit lib 00:17:04.410 07:57:01 -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:04.410 07:57:01 -- common/autotest_common.sh@10 -- # set +x 00:17:04.410 07:57:01 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:17:04.410 07:57:01 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:17:04.410 07:57:01 -- spdk/autotest.sh@272 -- # '[' 0 -eq 1 ']' 00:17:04.410 07:57:01 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:17:04.410 07:57:01 -- 
spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:17:04.410 07:57:01 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:17:04.410 07:57:01 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:17:04.410 07:57:01 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:17:04.410 07:57:01 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:17:04.410 07:57:01 -- spdk/autotest.sh@338 -- # '[' 1 -eq 1 ']' 00:17:04.410 07:57:01 -- spdk/autotest.sh@339 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:17:04.410 07:57:01 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:17:04.410 07:57:01 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:04.410 07:57:01 -- common/autotest_common.sh@10 -- # set +x 00:17:04.410 ************************************ 00:17:04.410 START TEST ftl 00:17:04.410 ************************************ 00:17:04.410 07:57:02 ftl -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:17:04.410 * Looking for test storage... 00:17:04.410 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:17:04.410 07:57:02 ftl -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:17:04.410 07:57:02 ftl -- common/autotest_common.sh@1681 -- # lcov --version 00:17:04.410 07:57:02 ftl -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:17:04.410 07:57:02 ftl -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:17:04.410 07:57:02 ftl -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:04.410 07:57:02 ftl -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:04.410 07:57:02 ftl -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:04.410 07:57:02 ftl -- scripts/common.sh@336 -- # IFS=.-: 00:17:04.410 07:57:02 ftl -- scripts/common.sh@336 -- # read -ra ver1 00:17:04.410 07:57:02 ftl -- scripts/common.sh@337 -- # IFS=.-: 00:17:04.410 07:57:02 ftl -- scripts/common.sh@337 -- # read -ra ver2 00:17:04.410 07:57:02 ftl -- scripts/common.sh@338 -- # local 'op=<' 00:17:04.410 07:57:02 ftl -- scripts/common.sh@340 -- # ver1_l=2 00:17:04.410 07:57:02 ftl -- scripts/common.sh@341 -- # ver2_l=1 00:17:04.410 07:57:02 ftl -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:04.410 07:57:02 ftl -- scripts/common.sh@344 -- # case "$op" in 00:17:04.410 07:57:02 ftl -- scripts/common.sh@345 -- # : 1 00:17:04.410 07:57:02 ftl -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:04.410 07:57:02 ftl -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:04.410 07:57:02 ftl -- scripts/common.sh@365 -- # decimal 1 00:17:04.410 07:57:02 ftl -- scripts/common.sh@353 -- # local d=1 00:17:04.410 07:57:02 ftl -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:04.410 07:57:02 ftl -- scripts/common.sh@355 -- # echo 1 00:17:04.410 07:57:02 ftl -- scripts/common.sh@365 -- # ver1[v]=1 00:17:04.410 07:57:02 ftl -- scripts/common.sh@366 -- # decimal 2 00:17:04.410 07:57:02 ftl -- scripts/common.sh@353 -- # local d=2 00:17:04.410 07:57:02 ftl -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:04.410 07:57:02 ftl -- scripts/common.sh@355 -- # echo 2 00:17:04.410 07:57:02 ftl -- scripts/common.sh@366 -- # ver2[v]=2 00:17:04.410 07:57:02 ftl -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:04.410 07:57:02 ftl -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:04.410 07:57:02 ftl -- scripts/common.sh@368 -- # return 0 00:17:04.410 07:57:02 ftl -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:04.410 07:57:02 ftl -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:17:04.410 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:04.410 --rc genhtml_branch_coverage=1 00:17:04.410 --rc genhtml_function_coverage=1 00:17:04.410 --rc genhtml_legend=1 00:17:04.410 --rc geninfo_all_blocks=1 00:17:04.410 --rc geninfo_unexecuted_blocks=1 00:17:04.410 00:17:04.410 ' 00:17:04.410 07:57:02 ftl -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:17:04.410 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:04.410 --rc genhtml_branch_coverage=1 00:17:04.410 --rc genhtml_function_coverage=1 00:17:04.410 --rc genhtml_legend=1 00:17:04.410 --rc geninfo_all_blocks=1 00:17:04.410 --rc geninfo_unexecuted_blocks=1 00:17:04.410 00:17:04.410 ' 00:17:04.410 07:57:02 ftl -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:17:04.410 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:04.410 --rc genhtml_branch_coverage=1 00:17:04.410 --rc genhtml_function_coverage=1 00:17:04.410 --rc genhtml_legend=1 00:17:04.410 --rc geninfo_all_blocks=1 00:17:04.410 --rc geninfo_unexecuted_blocks=1 00:17:04.410 00:17:04.410 ' 00:17:04.410 07:57:02 ftl -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:17:04.410 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:04.410 --rc genhtml_branch_coverage=1 00:17:04.410 --rc genhtml_function_coverage=1 00:17:04.410 --rc genhtml_legend=1 00:17:04.410 --rc geninfo_all_blocks=1 00:17:04.410 --rc geninfo_unexecuted_blocks=1 00:17:04.410 00:17:04.410 ' 00:17:04.410 07:57:02 ftl -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:17:04.410 07:57:02 ftl -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:17:04.410 07:57:02 ftl -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:17:04.410 07:57:02 ftl -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:17:04.410 07:57:02 ftl -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
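[editor's note] The cmp_versions trace just above (seen here and in the ublk_recovery preamble) is scripts/common.sh deciding whether the installed lcov predates 2.x and therefore needs the old --rc option spelling. A standalone approximation of that helper, simplified and assuming purely numeric version fields:

    cmp_versions() {   # usage: cmp_versions 1.15 '<' 2
      local op=$2 v
      local -a ver1 ver2
      IFS=.-: read -ra ver1 <<< "$1"                  # split on '.', '-', ':' as the trace shows
      IFS=.-: read -ra ver2 <<< "$3"
      local len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
      for (( v = 0; v < len; v++ )); do               # compare field by field, missing fields as 0
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { [[ $op == *'>'* ]]; return; }
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { [[ $op == *'<'* ]]; return; }
      done
      [[ $op == *'='* ]]                              # all fields equal: true for ==, <=, >=
    }
    cmp_versions "$(lcov --version | awk '{print $NF}')" '<' 2 \
      && echo "old lcov: use the --rc lcov_branch_coverage=1 spelling"

With lcov 1.15 against 2, the first field already decides (1 < 2), which is why both suites above export the long-form LCOV_OPTS.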
00:17:04.410 07:57:02 ftl -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:17:04.410 07:57:02 ftl -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:04.410 07:57:02 ftl -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:17:04.410 07:57:02 ftl -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:17:04.410 07:57:02 ftl -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:04.410 07:57:02 ftl -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:04.410 07:57:02 ftl -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:17:04.410 07:57:02 ftl -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:17:04.410 07:57:02 ftl -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:04.410 07:57:02 ftl -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:04.410 07:57:02 ftl -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:17:04.410 07:57:02 ftl -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:17:04.410 07:57:02 ftl -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:04.410 07:57:02 ftl -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:04.410 07:57:02 ftl -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:17:04.410 07:57:02 ftl -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:17:04.410 07:57:02 ftl -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:04.410 07:57:02 ftl -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:04.410 07:57:02 ftl -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:04.410 07:57:02 ftl -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:04.410 07:57:02 ftl -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:17:04.410 07:57:02 ftl -- ftl/common.sh@23 -- # spdk_ini_pid= 00:17:04.410 07:57:02 ftl -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:04.410 07:57:02 ftl -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:04.410 07:57:02 ftl -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:04.410 07:57:02 ftl -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:17:04.410 07:57:02 ftl -- ftl/ftl.sh@34 -- # PCI_ALLOWED= 00:17:04.411 07:57:02 ftl -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:17:04.411 07:57:02 ftl -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:17:04.411 07:57:02 ftl -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:17:04.411 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:04.411 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:04.411 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:04.411 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:04.411 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:04.411 07:57:02 ftl -- ftl/ftl.sh@37 -- # spdk_tgt_pid=74525 00:17:04.411 07:57:02 ftl -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:17:04.411 07:57:02 ftl -- ftl/ftl.sh@38 -- # waitforlisten 74525 00:17:04.411 07:57:02 ftl -- common/autotest_common.sh@831 -- # '[' -z 74525 ']' 00:17:04.411 07:57:02 ftl -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:04.411 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:04.411 07:57:02 ftl -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:04.411 07:57:02 ftl -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:04.411 07:57:02 ftl -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:04.411 07:57:02 ftl -- common/autotest_common.sh@10 -- # set +x 00:17:04.411 [2024-10-09 07:57:02.793755] Starting SPDK v25.01-pre git sha1 1c2942c86 / DPDK 24.03.0 initialization... 00:17:04.411 [2024-10-09 07:57:02.793904] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74525 ] 00:17:04.411 [2024-10-09 07:57:02.956052] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:04.411 [2024-10-09 07:57:03.207977] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:17:04.411 07:57:03 ftl -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:04.411 07:57:03 ftl -- common/autotest_common.sh@864 -- # return 0 00:17:04.411 07:57:03 ftl -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:17:04.411 07:57:04 ftl -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:17:04.411 07:57:05 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:17:04.411 07:57:05 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:17:04.411 07:57:05 ftl -- ftl/ftl.sh@46 -- # cache_size=1310720 00:17:04.411 07:57:05 ftl -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:17:04.411 07:57:05 ftl -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:17:04.411 07:57:06 ftl -- ftl/ftl.sh@47 -- # cache_disks=0000:00:10.0 00:17:04.411 07:57:06 ftl -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:17:04.411 07:57:06 ftl -- ftl/ftl.sh@49 -- # nv_cache=0000:00:10.0 00:17:04.411 07:57:06 ftl -- ftl/ftl.sh@50 -- # break 00:17:04.411 07:57:06 ftl -- ftl/ftl.sh@53 -- # '[' -z 0000:00:10.0 ']' 00:17:04.411 07:57:06 ftl -- ftl/ftl.sh@59 -- # base_size=1310720 00:17:04.411 07:57:06 ftl -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:17:04.411 07:57:06 ftl -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:17:04.411 07:57:06 ftl -- ftl/ftl.sh@60 -- # base_disks=0000:00:11.0 00:17:04.411 07:57:06 ftl -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:17:04.411 07:57:06 ftl -- ftl/ftl.sh@62 -- # device=0000:00:11.0 00:17:04.411 07:57:06 ftl -- ftl/ftl.sh@63 -- # break 00:17:04.411 07:57:06 ftl -- ftl/ftl.sh@66 -- # killprocess 74525 00:17:04.411 07:57:06 ftl -- common/autotest_common.sh@950 -- # '[' -z 74525 ']' 00:17:04.411 07:57:06 ftl -- common/autotest_common.sh@954 -- # kill -0 74525 00:17:04.411 07:57:06 ftl -- common/autotest_common.sh@955 -- # uname 00:17:04.411 07:57:06 ftl -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:04.411 07:57:06 ftl -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74525 00:17:04.411 07:57:06 ftl -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:04.411 killing process with pid 74525 00:17:04.411 07:57:06 ftl -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:04.411 07:57:06 ftl -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74525' 00:17:04.411 07:57:06 ftl -- common/autotest_common.sh@969 -- # kill 74525 00:17:04.411 07:57:06 ftl -- common/autotest_common.sh@974 -- # wait 74525 00:17:06.989 07:57:08 ftl -- ftl/ftl.sh@68 -- # '[' -z 0000:00:11.0 ']' 00:17:06.989 07:57:08 ftl -- ftl/ftl.sh@73 -- # run_test ftl_fio_basic /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:17:06.989 07:57:08 ftl -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:17:06.989 07:57:08 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:06.989 07:57:08 ftl -- common/autotest_common.sh@10 -- # set +x 00:17:06.989 ************************************ 00:17:06.989 START TEST ftl_fio_basic 00:17:06.989 ************************************ 00:17:06.989 07:57:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:17:06.989 * Looking for test storage... 00:17:06.989 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:17:06.989 07:57:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:17:06.989 07:57:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1681 -- # lcov --version 00:17:06.989 07:57:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:17:06.989 07:57:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:17:06.989 07:57:08 ftl.ftl_fio_basic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:06.989 07:57:08 ftl.ftl_fio_basic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:06.989 07:57:08 ftl.ftl_fio_basic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:06.989 07:57:08 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # IFS=.-: 00:17:06.989 07:57:08 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # read -ra ver1 00:17:06.989 07:57:08 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # IFS=.-: 00:17:06.989 07:57:08 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # read -ra ver2 00:17:06.989 07:57:08 ftl.ftl_fio_basic -- scripts/common.sh@338 -- # local 'op=<' 00:17:06.989 07:57:08 ftl.ftl_fio_basic -- scripts/common.sh@340 -- # ver1_l=2 00:17:06.989 07:57:08 ftl.ftl_fio_basic -- scripts/common.sh@341 -- # ver2_l=1 00:17:06.989 07:57:08 ftl.ftl_fio_basic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:06.989 07:57:08 ftl.ftl_fio_basic -- scripts/common.sh@344 -- # case "$op" in 00:17:06.989 07:57:08 ftl.ftl_fio_basic -- scripts/common.sh@345 -- # : 1 00:17:06.989 07:57:08 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:06.989 07:57:08 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:06.989 07:57:08 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # decimal 1 00:17:06.989 07:57:08 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=1 00:17:06.989 07:57:08 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:06.989 07:57:08 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 1 00:17:06.989 07:57:08 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # ver1[v]=1 00:17:06.989 07:57:08 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # decimal 2 00:17:06.989 07:57:08 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=2 00:17:06.989 07:57:08 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:06.989 07:57:08 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 2 00:17:06.989 07:57:08 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # ver2[v]=2 00:17:06.989 07:57:08 ftl.ftl_fio_basic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:06.989 07:57:08 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:06.989 07:57:08 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # return 0 00:17:06.989 07:57:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:06.989 07:57:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:17:06.989 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:06.989 --rc genhtml_branch_coverage=1 00:17:06.989 --rc genhtml_function_coverage=1 00:17:06.989 --rc genhtml_legend=1 00:17:06.989 --rc geninfo_all_blocks=1 00:17:06.989 --rc geninfo_unexecuted_blocks=1 00:17:06.989 00:17:06.989 ' 00:17:06.989 07:57:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:17:06.989 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:06.989 --rc genhtml_branch_coverage=1 00:17:06.989 --rc genhtml_function_coverage=1 00:17:06.989 --rc genhtml_legend=1 00:17:06.989 --rc geninfo_all_blocks=1 00:17:06.990 --rc geninfo_unexecuted_blocks=1 00:17:06.990 00:17:06.990 ' 00:17:06.990 07:57:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:17:06.990 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:06.990 --rc genhtml_branch_coverage=1 00:17:06.990 --rc genhtml_function_coverage=1 00:17:06.990 --rc genhtml_legend=1 00:17:06.990 --rc geninfo_all_blocks=1 00:17:06.990 --rc geninfo_unexecuted_blocks=1 00:17:06.990 00:17:06.990 ' 00:17:06.990 07:57:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:17:06.990 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:06.990 --rc genhtml_branch_coverage=1 00:17:06.990 --rc genhtml_function_coverage=1 00:17:06.990 --rc genhtml_legend=1 00:17:06.990 --rc geninfo_all_blocks=1 00:17:06.990 --rc geninfo_unexecuted_blocks=1 00:17:06.990 00:17:06.990 ' 00:17:06.990 07:57:08 ftl.ftl_fio_basic -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:17:06.990 07:57:08 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:17:06.990 07:57:08 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:17:06.990 07:57:08 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:17:06.990 07:57:08 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
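Both ftl.sh and fio.sh begin by sourcing test/ftl/common.sh, which anchors every path to the repo from its own location; that is what the dirname/readlink pair just traced is doing. The equivalent bootstrap, sketched (resolving from "$0" is an assumption of this sketch; the trace only shows the resulting dirname and readlink calls):

# Sketch of the ftl/common.sh path bootstrap traced above.
testdir=$(readlink -f "$(dirname "$0")")   # /home/vagrant/spdk_repo/spdk/test/ftl
rootdir=$(readlink -f "$testdir/../..")    # /home/vagrant/spdk_repo/spdk
rpc_py=$rootdir/scripts/rpc.py             # used for every RPC that follows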
00:17:06.990 07:57:08 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:17:06.990 07:57:08 ftl.ftl_fio_basic -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:06.990 07:57:08 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:17:06.990 07:57:08 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:17:06.990 07:57:08 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:06.990 07:57:08 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:06.990 07:57:08 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:17:06.990 07:57:08 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:17:06.990 07:57:08 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:06.990 07:57:08 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:06.990 07:57:08 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:17:06.990 07:57:08 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:17:06.990 07:57:08 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:06.990 07:57:08 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:06.990 07:57:08 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:17:06.990 07:57:08 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:17:06.990 07:57:08 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:06.990 07:57:08 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:06.990 07:57:08 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:06.990 07:57:08 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:06.990 07:57:08 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:17:06.990 07:57:08 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # spdk_ini_pid= 00:17:06.990 07:57:08 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:06.990 07:57:08 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:06.990 07:57:08 ftl.ftl_fio_basic -- ftl/fio.sh@11 -- # declare -A suite 00:17:06.990 07:57:08 ftl.ftl_fio_basic -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:17:06.990 07:57:08 ftl.ftl_fio_basic -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:17:06.990 07:57:08 ftl.ftl_fio_basic -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:17:06.990 07:57:08 ftl.ftl_fio_basic -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:06.990 07:57:08 ftl.ftl_fio_basic -- ftl/fio.sh@23 -- # device=0000:00:11.0 00:17:06.990 07:57:08 ftl.ftl_fio_basic -- ftl/fio.sh@24 -- # cache_device=0000:00:10.0 00:17:06.990 07:57:08 ftl.ftl_fio_basic -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 
randw-verify-depth128' 00:17:06.990 07:57:08 ftl.ftl_fio_basic -- ftl/fio.sh@26 -- # uuid= 00:17:06.990 07:57:08 ftl.ftl_fio_basic -- ftl/fio.sh@27 -- # timeout=240 00:17:06.990 07:57:08 ftl.ftl_fio_basic -- ftl/fio.sh@29 -- # [[ y != y ]] 00:17:06.990 07:57:08 ftl.ftl_fio_basic -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:17:06.990 07:57:08 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:17:06.990 07:57:08 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:17:06.990 07:57:08 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:06.990 07:57:08 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:06.990 07:57:08 ftl.ftl_fio_basic -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:17:06.990 07:57:08 ftl.ftl_fio_basic -- ftl/fio.sh@45 -- # svcpid=74679 00:17:06.990 07:57:08 ftl.ftl_fio_basic -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:17:06.990 07:57:08 ftl.ftl_fio_basic -- ftl/fio.sh@46 -- # waitforlisten 74679 00:17:06.990 07:57:08 ftl.ftl_fio_basic -- common/autotest_common.sh@831 -- # '[' -z 74679 ']' 00:17:06.990 07:57:08 ftl.ftl_fio_basic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:06.990 07:57:08 ftl.ftl_fio_basic -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:06.990 07:57:08 ftl.ftl_fio_basic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:06.990 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:06.990 07:57:08 ftl.ftl_fio_basic -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:06.990 07:57:08 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:17:07.249 [2024-10-09 07:57:09.109555] Starting SPDK v25.01-pre git sha1 1c2942c86 / DPDK 24.03.0 initialization... 
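fio.sh keys its workload list off an associative array (the suite[...] assignments above) and selects with its third CLI argument, `basic` for this run; it then boots a dedicated target (`spdk_tgt -m 7`, pid 74679) and blocks in `waitforlisten` until the RPC socket answers. The suite table below is copied from the trace; the polling loop is only a simplified stand-in for waitforlisten, whose internals the log merely hints at (max_retries=100, the "Waiting for process..." echo):

# Suite selection plus a simplified wait-for-RPC loop (stand-in, not the
# verbatim waitforlisten implementation).
declare -A suite
suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128'
suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap'
tests=${suite[$3]}                         # $3 == basic for this run

/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 &
svcpid=$!
for ((i = 0; i < 100; i++)); do            # max_retries=100 in the trace
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
        rpc_get_methods &>/dev/null && break
    sleep 0.5
done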
00:17:07.249 [2024-10-09 07:57:09.109749] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74679 ] 00:17:07.507 [2024-10-09 07:57:09.282060] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:07.507 [2024-10-09 07:57:09.472522] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:17:07.507 [2024-10-09 07:57:09.472607] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:17:07.507 [2024-10-09 07:57:09.472615] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:17:08.441 07:57:10 ftl.ftl_fio_basic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:08.441 07:57:10 ftl.ftl_fio_basic -- common/autotest_common.sh@864 -- # return 0 00:17:08.441 07:57:10 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:17:08.441 07:57:10 ftl.ftl_fio_basic -- ftl/common.sh@54 -- # local name=nvme0 00:17:08.441 07:57:10 ftl.ftl_fio_basic -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:17:08.441 07:57:10 ftl.ftl_fio_basic -- ftl/common.sh@56 -- # local size=103424 00:17:08.441 07:57:10 ftl.ftl_fio_basic -- ftl/common.sh@59 -- # local base_bdev 00:17:08.441 07:57:10 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:17:08.699 07:57:10 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:17:08.699 07:57:10 ftl.ftl_fio_basic -- ftl/common.sh@62 -- # local base_size 00:17:08.699 07:57:10 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:17:08.699 07:57:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:17:08.699 07:57:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:17:08.699 07:57:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:17:08.699 07:57:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:17:08.699 07:57:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:17:08.957 07:57:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:17:08.957 { 00:17:08.957 "name": "nvme0n1", 00:17:08.957 "aliases": [ 00:17:08.957 "b77fded3-e4dc-48b9-9a30-6f4602cd6c75" 00:17:08.957 ], 00:17:08.957 "product_name": "NVMe disk", 00:17:08.957 "block_size": 4096, 00:17:08.957 "num_blocks": 1310720, 00:17:08.957 "uuid": "b77fded3-e4dc-48b9-9a30-6f4602cd6c75", 00:17:08.957 "numa_id": -1, 00:17:08.957 "assigned_rate_limits": { 00:17:08.957 "rw_ios_per_sec": 0, 00:17:08.957 "rw_mbytes_per_sec": 0, 00:17:08.957 "r_mbytes_per_sec": 0, 00:17:08.957 "w_mbytes_per_sec": 0 00:17:08.957 }, 00:17:08.957 "claimed": false, 00:17:08.957 "zoned": false, 00:17:08.957 "supported_io_types": { 00:17:08.957 "read": true, 00:17:08.957 "write": true, 00:17:08.957 "unmap": true, 00:17:08.957 "flush": true, 00:17:08.957 "reset": true, 00:17:08.957 "nvme_admin": true, 00:17:08.957 "nvme_io": true, 00:17:08.957 "nvme_io_md": false, 00:17:08.957 "write_zeroes": true, 00:17:08.957 "zcopy": false, 00:17:08.957 "get_zone_info": false, 00:17:08.957 "zone_management": false, 00:17:08.957 "zone_append": false, 00:17:08.957 "compare": true, 00:17:08.957 "compare_and_write": false, 00:17:08.957 "abort": true, 00:17:08.957 
"seek_hole": false, 00:17:08.957 "seek_data": false, 00:17:08.957 "copy": true, 00:17:08.957 "nvme_iov_md": false 00:17:08.957 }, 00:17:08.957 "driver_specific": { 00:17:08.957 "nvme": [ 00:17:08.957 { 00:17:08.957 "pci_address": "0000:00:11.0", 00:17:08.957 "trid": { 00:17:08.957 "trtype": "PCIe", 00:17:08.957 "traddr": "0000:00:11.0" 00:17:08.957 }, 00:17:08.957 "ctrlr_data": { 00:17:08.957 "cntlid": 0, 00:17:08.957 "vendor_id": "0x1b36", 00:17:08.957 "model_number": "QEMU NVMe Ctrl", 00:17:08.957 "serial_number": "12341", 00:17:08.957 "firmware_revision": "8.0.0", 00:17:08.957 "subnqn": "nqn.2019-08.org.qemu:12341", 00:17:08.957 "oacs": { 00:17:08.957 "security": 0, 00:17:08.957 "format": 1, 00:17:08.957 "firmware": 0, 00:17:08.957 "ns_manage": 1 00:17:08.957 }, 00:17:08.957 "multi_ctrlr": false, 00:17:08.957 "ana_reporting": false 00:17:08.957 }, 00:17:08.957 "vs": { 00:17:08.957 "nvme_version": "1.4" 00:17:08.957 }, 00:17:08.957 "ns_data": { 00:17:08.957 "id": 1, 00:17:08.957 "can_share": false 00:17:08.957 } 00:17:08.957 } 00:17:08.957 ], 00:17:08.957 "mp_policy": "active_passive" 00:17:08.957 } 00:17:08.957 } 00:17:08.957 ]' 00:17:08.957 07:57:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:17:08.957 07:57:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:17:08.957 07:57:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:17:08.957 07:57:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=1310720 00:17:08.957 07:57:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:17:08.957 07:57:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 5120 00:17:09.215 07:57:10 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # base_size=5120 00:17:09.215 07:57:10 ftl.ftl_fio_basic -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:17:09.215 07:57:10 ftl.ftl_fio_basic -- ftl/common.sh@67 -- # clear_lvols 00:17:09.215 07:57:10 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:17:09.215 07:57:10 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:17:09.472 07:57:11 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # stores= 00:17:09.472 07:57:11 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:17:09.730 07:57:11 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # lvs=b2f69695-470c-4fa2-946c-44fb17b79817 00:17:09.730 07:57:11 ftl.ftl_fio_basic -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u b2f69695-470c-4fa2-946c-44fb17b79817 00:17:10.013 07:57:11 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # split_bdev=9a153477-45b1-4d8c-b5a9-9eb781c785a6 00:17:10.013 07:57:11 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:10.0 9a153477-45b1-4d8c-b5a9-9eb781c785a6 00:17:10.013 07:57:11 ftl.ftl_fio_basic -- ftl/common.sh@35 -- # local name=nvc0 00:17:10.013 07:57:11 ftl.ftl_fio_basic -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:17:10.013 07:57:11 ftl.ftl_fio_basic -- ftl/common.sh@37 -- # local base_bdev=9a153477-45b1-4d8c-b5a9-9eb781c785a6 00:17:10.013 07:57:11 ftl.ftl_fio_basic -- ftl/common.sh@38 -- # local cache_size= 00:17:10.013 07:57:11 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # get_bdev_size 9a153477-45b1-4d8c-b5a9-9eb781c785a6 00:17:10.013 07:57:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=9a153477-45b1-4d8c-b5a9-9eb781c785a6 
00:17:10.013 07:57:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:17:10.013 07:57:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:17:10.013 07:57:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:17:10.013 07:57:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 9a153477-45b1-4d8c-b5a9-9eb781c785a6 00:17:10.271 07:57:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:17:10.272 { 00:17:10.272 "name": "9a153477-45b1-4d8c-b5a9-9eb781c785a6", 00:17:10.272 "aliases": [ 00:17:10.272 "lvs/nvme0n1p0" 00:17:10.272 ], 00:17:10.272 "product_name": "Logical Volume", 00:17:10.272 "block_size": 4096, 00:17:10.272 "num_blocks": 26476544, 00:17:10.272 "uuid": "9a153477-45b1-4d8c-b5a9-9eb781c785a6", 00:17:10.272 "assigned_rate_limits": { 00:17:10.272 "rw_ios_per_sec": 0, 00:17:10.272 "rw_mbytes_per_sec": 0, 00:17:10.272 "r_mbytes_per_sec": 0, 00:17:10.272 "w_mbytes_per_sec": 0 00:17:10.272 }, 00:17:10.272 "claimed": false, 00:17:10.272 "zoned": false, 00:17:10.272 "supported_io_types": { 00:17:10.272 "read": true, 00:17:10.272 "write": true, 00:17:10.272 "unmap": true, 00:17:10.272 "flush": false, 00:17:10.272 "reset": true, 00:17:10.272 "nvme_admin": false, 00:17:10.272 "nvme_io": false, 00:17:10.272 "nvme_io_md": false, 00:17:10.272 "write_zeroes": true, 00:17:10.272 "zcopy": false, 00:17:10.272 "get_zone_info": false, 00:17:10.272 "zone_management": false, 00:17:10.272 "zone_append": false, 00:17:10.272 "compare": false, 00:17:10.272 "compare_and_write": false, 00:17:10.272 "abort": false, 00:17:10.272 "seek_hole": true, 00:17:10.272 "seek_data": true, 00:17:10.272 "copy": false, 00:17:10.272 "nvme_iov_md": false 00:17:10.272 }, 00:17:10.272 "driver_specific": { 00:17:10.272 "lvol": { 00:17:10.272 "lvol_store_uuid": "b2f69695-470c-4fa2-946c-44fb17b79817", 00:17:10.272 "base_bdev": "nvme0n1", 00:17:10.272 "thin_provision": true, 00:17:10.272 "num_allocated_clusters": 0, 00:17:10.272 "snapshot": false, 00:17:10.272 "clone": false, 00:17:10.272 "esnap_clone": false 00:17:10.272 } 00:17:10.272 } 00:17:10.272 } 00:17:10.272 ]' 00:17:10.272 07:57:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:17:10.272 07:57:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:17:10.272 07:57:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:17:10.272 07:57:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=26476544 00:17:10.272 07:57:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:17:10.272 07:57:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 103424 00:17:10.272 07:57:12 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # local base_size=5171 00:17:10.272 07:57:12 ftl.ftl_fio_basic -- ftl/common.sh@44 -- # local nvc_bdev 00:17:10.272 07:57:12 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:17:10.838 07:57:12 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:17:10.838 07:57:12 ftl.ftl_fio_basic -- ftl/common.sh@47 -- # [[ -z '' ]] 00:17:10.838 07:57:12 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # get_bdev_size 9a153477-45b1-4d8c-b5a9-9eb781c785a6 00:17:10.838 07:57:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=9a153477-45b1-4d8c-b5a9-9eb781c785a6 00:17:10.838 07:57:12 
ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:17:10.838 07:57:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:17:10.838 07:57:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:17:10.838 07:57:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 9a153477-45b1-4d8c-b5a9-9eb781c785a6 00:17:11.098 07:57:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:17:11.098 { 00:17:11.098 "name": "9a153477-45b1-4d8c-b5a9-9eb781c785a6", 00:17:11.098 "aliases": [ 00:17:11.098 "lvs/nvme0n1p0" 00:17:11.098 ], 00:17:11.098 "product_name": "Logical Volume", 00:17:11.098 "block_size": 4096, 00:17:11.098 "num_blocks": 26476544, 00:17:11.098 "uuid": "9a153477-45b1-4d8c-b5a9-9eb781c785a6", 00:17:11.098 "assigned_rate_limits": { 00:17:11.098 "rw_ios_per_sec": 0, 00:17:11.098 "rw_mbytes_per_sec": 0, 00:17:11.098 "r_mbytes_per_sec": 0, 00:17:11.098 "w_mbytes_per_sec": 0 00:17:11.098 }, 00:17:11.098 "claimed": false, 00:17:11.098 "zoned": false, 00:17:11.098 "supported_io_types": { 00:17:11.098 "read": true, 00:17:11.098 "write": true, 00:17:11.098 "unmap": true, 00:17:11.098 "flush": false, 00:17:11.098 "reset": true, 00:17:11.098 "nvme_admin": false, 00:17:11.098 "nvme_io": false, 00:17:11.098 "nvme_io_md": false, 00:17:11.098 "write_zeroes": true, 00:17:11.098 "zcopy": false, 00:17:11.098 "get_zone_info": false, 00:17:11.098 "zone_management": false, 00:17:11.098 "zone_append": false, 00:17:11.098 "compare": false, 00:17:11.098 "compare_and_write": false, 00:17:11.098 "abort": false, 00:17:11.098 "seek_hole": true, 00:17:11.098 "seek_data": true, 00:17:11.098 "copy": false, 00:17:11.098 "nvme_iov_md": false 00:17:11.098 }, 00:17:11.098 "driver_specific": { 00:17:11.098 "lvol": { 00:17:11.098 "lvol_store_uuid": "b2f69695-470c-4fa2-946c-44fb17b79817", 00:17:11.098 "base_bdev": "nvme0n1", 00:17:11.098 "thin_provision": true, 00:17:11.098 "num_allocated_clusters": 0, 00:17:11.098 "snapshot": false, 00:17:11.098 "clone": false, 00:17:11.098 "esnap_clone": false 00:17:11.098 } 00:17:11.098 } 00:17:11.098 } 00:17:11.098 ]' 00:17:11.098 07:57:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:17:11.098 07:57:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:17:11.098 07:57:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:17:11.098 07:57:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=26476544 00:17:11.098 07:57:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:17:11.098 07:57:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 103424 00:17:11.098 07:57:13 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # cache_size=5171 00:17:11.098 07:57:13 ftl.ftl_fio_basic -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:17:11.357 07:57:13 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:17:11.357 07:57:13 ftl.ftl_fio_basic -- ftl/fio.sh@51 -- # l2p_percentage=60 00:17:11.357 07:57:13 ftl.ftl_fio_basic -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:17:11.357 /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:17:11.357 07:57:13 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # get_bdev_size 9a153477-45b1-4d8c-b5a9-9eb781c785a6 00:17:11.357 07:57:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local 
bdev_name=9a153477-45b1-4d8c-b5a9-9eb781c785a6 00:17:11.357 07:57:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:17:11.357 07:57:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:17:11.357 07:57:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:17:11.357 07:57:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 9a153477-45b1-4d8c-b5a9-9eb781c785a6 00:17:11.925 07:57:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:17:11.925 { 00:17:11.925 "name": "9a153477-45b1-4d8c-b5a9-9eb781c785a6", 00:17:11.925 "aliases": [ 00:17:11.925 "lvs/nvme0n1p0" 00:17:11.925 ], 00:17:11.925 "product_name": "Logical Volume", 00:17:11.925 "block_size": 4096, 00:17:11.925 "num_blocks": 26476544, 00:17:11.925 "uuid": "9a153477-45b1-4d8c-b5a9-9eb781c785a6", 00:17:11.925 "assigned_rate_limits": { 00:17:11.925 "rw_ios_per_sec": 0, 00:17:11.925 "rw_mbytes_per_sec": 0, 00:17:11.925 "r_mbytes_per_sec": 0, 00:17:11.925 "w_mbytes_per_sec": 0 00:17:11.925 }, 00:17:11.925 "claimed": false, 00:17:11.925 "zoned": false, 00:17:11.925 "supported_io_types": { 00:17:11.925 "read": true, 00:17:11.925 "write": true, 00:17:11.925 "unmap": true, 00:17:11.925 "flush": false, 00:17:11.925 "reset": true, 00:17:11.925 "nvme_admin": false, 00:17:11.925 "nvme_io": false, 00:17:11.925 "nvme_io_md": false, 00:17:11.925 "write_zeroes": true, 00:17:11.925 "zcopy": false, 00:17:11.925 "get_zone_info": false, 00:17:11.925 "zone_management": false, 00:17:11.925 "zone_append": false, 00:17:11.925 "compare": false, 00:17:11.925 "compare_and_write": false, 00:17:11.925 "abort": false, 00:17:11.925 "seek_hole": true, 00:17:11.925 "seek_data": true, 00:17:11.925 "copy": false, 00:17:11.925 "nvme_iov_md": false 00:17:11.925 }, 00:17:11.925 "driver_specific": { 00:17:11.925 "lvol": { 00:17:11.925 "lvol_store_uuid": "b2f69695-470c-4fa2-946c-44fb17b79817", 00:17:11.925 "base_bdev": "nvme0n1", 00:17:11.925 "thin_provision": true, 00:17:11.925 "num_allocated_clusters": 0, 00:17:11.925 "snapshot": false, 00:17:11.925 "clone": false, 00:17:11.925 "esnap_clone": false 00:17:11.925 } 00:17:11.925 } 00:17:11.925 } 00:17:11.925 ]' 00:17:11.925 07:57:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:17:11.925 07:57:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:17:11.925 07:57:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:17:11.925 07:57:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=26476544 00:17:11.925 07:57:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:17:11.925 07:57:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 103424 00:17:11.925 07:57:13 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:17:11.925 07:57:13 ftl.ftl_fio_basic -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:17:11.925 07:57:13 ftl.ftl_fio_basic -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 9a153477-45b1-4d8c-b5a9-9eb781c785a6 -c nvc0n1p0 --l2p_dram_limit 60 00:17:12.184 [2024-10-09 07:57:14.031081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.184 [2024-10-09 07:57:14.031152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:12.184 [2024-10-09 07:57:14.031177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:17:12.184 
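Note the genuine script bug captured a few entries back: fio.sh line 52 traced as `'[' -eq 1 ']'` and bash answered `[: -eq: unary operator expected`. The variable on the left of -eq (its name is not visible in the xtrace, only its empty expansion) was unset, so `[` received a one-operand expression; the run survives because the broken test simply evaluates false and execution falls through to line 56. Reproduction and the usual guards, with $flag as a hypothetical stand-in for whatever fio.sh line 52 actually tests:

# The failure mode behind "[: -eq: unary operator expected" above;
# $flag is a hypothetical stand-in for the unset variable at fio.sh line 52.
flag=
[ $flag -eq 1 ]          # expands to '[ -eq 1 ]' -> unary operator expected
[ "${flag:-0}" -eq 1 ]   # guard: default the empty value to 0
[[ $flag -eq 1 ]]        # guard: [[ ]] arithmetic treats an empty operand as 0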
[2024-10-09 07:57:14.031194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.184 [2024-10-09 07:57:14.031282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.184 [2024-10-09 07:57:14.031302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:12.184 [2024-10-09 07:57:14.031318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:17:12.184 [2024-10-09 07:57:14.031345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.184 [2024-10-09 07:57:14.031406] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:12.184 [2024-10-09 07:57:14.032521] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:12.184 [2024-10-09 07:57:14.032593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.184 [2024-10-09 07:57:14.032620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:12.184 [2024-10-09 07:57:14.032651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.200 ms 00:17:12.184 [2024-10-09 07:57:14.032674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.184 [2024-10-09 07:57:14.032922] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID f969091b-b94b-4ef6-aa02-bb0012554e58 00:17:12.184 [2024-10-09 07:57:14.034077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.184 [2024-10-09 07:57:14.034124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:17:12.184 [2024-10-09 07:57:14.034142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:17:12.184 [2024-10-09 07:57:14.034156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.184 [2024-10-09 07:57:14.039007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.184 [2024-10-09 07:57:14.039072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:12.184 [2024-10-09 07:57:14.039091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.765 ms 00:17:12.184 [2024-10-09 07:57:14.039106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.184 [2024-10-09 07:57:14.039266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.184 [2024-10-09 07:57:14.039295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:12.184 [2024-10-09 07:57:14.039311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.107 ms 00:17:12.184 [2024-10-09 07:57:14.039344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.184 [2024-10-09 07:57:14.039452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.184 [2024-10-09 07:57:14.039486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:12.184 [2024-10-09 07:57:14.039501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:17:12.184 [2024-10-09 07:57:14.039515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.184 [2024-10-09 07:57:14.039574] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:12.184 [2024-10-09 07:57:14.044130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.184 [2024-10-09 
07:57:14.044170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:12.184 [2024-10-09 07:57:14.044189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.562 ms 00:17:12.184 [2024-10-09 07:57:14.044202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.184 [2024-10-09 07:57:14.044260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.184 [2024-10-09 07:57:14.044277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:12.184 [2024-10-09 07:57:14.044292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:17:12.184 [2024-10-09 07:57:14.044304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.184 [2024-10-09 07:57:14.044396] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:17:12.184 [2024-10-09 07:57:14.044591] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:12.184 [2024-10-09 07:57:14.044630] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:12.184 [2024-10-09 07:57:14.044648] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:12.184 [2024-10-09 07:57:14.044671] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:12.184 [2024-10-09 07:57:14.044685] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:12.184 [2024-10-09 07:57:14.044700] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:17:12.184 [2024-10-09 07:57:14.044720] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:12.184 [2024-10-09 07:57:14.044734] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:12.184 [2024-10-09 07:57:14.044747] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:12.184 [2024-10-09 07:57:14.044762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.184 [2024-10-09 07:57:14.044775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:12.184 [2024-10-09 07:57:14.044791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.371 ms 00:17:12.184 [2024-10-09 07:57:14.044804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.184 [2024-10-09 07:57:14.044921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.184 [2024-10-09 07:57:14.044944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:12.184 [2024-10-09 07:57:14.044960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:17:12.184 [2024-10-09 07:57:14.044972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.184 [2024-10-09 07:57:14.045100] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:12.184 [2024-10-09 07:57:14.045126] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:12.184 [2024-10-09 07:57:14.045143] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:12.184 [2024-10-09 07:57:14.045156] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:12.184 [2024-10-09 07:57:14.045170] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region l2p 00:17:12.184 [2024-10-09 07:57:14.045182] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:12.184 [2024-10-09 07:57:14.045195] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:17:12.184 [2024-10-09 07:57:14.045207] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:12.184 [2024-10-09 07:57:14.045220] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:17:12.184 [2024-10-09 07:57:14.045231] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:12.184 [2024-10-09 07:57:14.045244] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:12.184 [2024-10-09 07:57:14.045256] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:17:12.184 [2024-10-09 07:57:14.045268] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:12.184 [2024-10-09 07:57:14.045280] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:12.185 [2024-10-09 07:57:14.045293] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:17:12.185 [2024-10-09 07:57:14.045305] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:12.185 [2024-10-09 07:57:14.045326] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:12.185 [2024-10-09 07:57:14.045359] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:17:12.185 [2024-10-09 07:57:14.045376] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:12.185 [2024-10-09 07:57:14.045389] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:12.185 [2024-10-09 07:57:14.045403] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:17:12.185 [2024-10-09 07:57:14.045414] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:12.185 [2024-10-09 07:57:14.045428] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:12.185 [2024-10-09 07:57:14.045439] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:17:12.185 [2024-10-09 07:57:14.045452] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:12.185 [2024-10-09 07:57:14.045463] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:12.185 [2024-10-09 07:57:14.045476] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:17:12.185 [2024-10-09 07:57:14.045488] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:12.185 [2024-10-09 07:57:14.045501] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:12.185 [2024-10-09 07:57:14.045512] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:17:12.185 [2024-10-09 07:57:14.045525] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:12.185 [2024-10-09 07:57:14.045536] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:12.185 [2024-10-09 07:57:14.045551] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:17:12.185 [2024-10-09 07:57:14.045563] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:12.185 [2024-10-09 07:57:14.045577] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:12.185 [2024-10-09 07:57:14.045588] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:17:12.185 [2024-10-09 07:57:14.045601] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:12.185 [2024-10-09 07:57:14.045612] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:12.185 [2024-10-09 07:57:14.045625] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:17:12.185 [2024-10-09 07:57:14.045658] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:12.185 [2024-10-09 07:57:14.045673] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:12.185 [2024-10-09 07:57:14.045685] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:17:12.185 [2024-10-09 07:57:14.045698] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:12.185 [2024-10-09 07:57:14.045710] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:12.185 [2024-10-09 07:57:14.045731] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:12.185 [2024-10-09 07:57:14.045746] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:12.185 [2024-10-09 07:57:14.045760] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:12.185 [2024-10-09 07:57:14.045772] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:12.185 [2024-10-09 07:57:14.045789] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:12.185 [2024-10-09 07:57:14.045803] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:12.185 [2024-10-09 07:57:14.045816] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:12.185 [2024-10-09 07:57:14.045828] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:12.185 [2024-10-09 07:57:14.045841] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:12.185 [2024-10-09 07:57:14.045870] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:12.185 [2024-10-09 07:57:14.045898] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:12.185 [2024-10-09 07:57:14.045913] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:17:12.185 [2024-10-09 07:57:14.045931] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:17:12.185 [2024-10-09 07:57:14.045945] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:17:12.185 [2024-10-09 07:57:14.045962] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:17:12.185 [2024-10-09 07:57:14.045976] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:17:12.185 [2024-10-09 07:57:14.045990] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:17:12.185 [2024-10-09 07:57:14.046002] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:17:12.185 [2024-10-09 07:57:14.046016] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 
blk_offs:0x7120 blk_sz:0x40 00:17:12.185 [2024-10-09 07:57:14.046028] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:17:12.185 [2024-10-09 07:57:14.046046] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:17:12.185 [2024-10-09 07:57:14.046059] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:17:12.185 [2024-10-09 07:57:14.046073] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:17:12.185 [2024-10-09 07:57:14.046085] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:17:12.185 [2024-10-09 07:57:14.046099] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:17:12.185 [2024-10-09 07:57:14.046112] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:12.185 [2024-10-09 07:57:14.046133] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:12.185 [2024-10-09 07:57:14.046148] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:12.185 [2024-10-09 07:57:14.046162] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:12.185 [2024-10-09 07:57:14.046174] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:12.185 [2024-10-09 07:57:14.046189] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:12.185 [2024-10-09 07:57:14.046203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.185 [2024-10-09 07:57:14.046217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:12.185 [2024-10-09 07:57:14.046230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.179 ms 00:17:12.185 [2024-10-09 07:57:14.046243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.185 [2024-10-09 07:57:14.046322] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
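The superblock layout dump just printed is expressed in FTL metadata blocks; assuming the 4 KiB block size these regions use (consistent with every MiB figure in the dump), each blk_sz converts directly to the region sizes reported a few entries earlier. Spot checks:

# blk_sz (in 4 KiB blocks) -> MiB, cross-checked against the dump above.
echo $((0x1900000 * 4096 / 1024 / 1024))   # 102400 -> data_btm, 102400.00 MiB
echo $((0x5000 * 4096 / 1024 / 1024))      # 80     -> l2p region, 80.00 MiB
echo $((0x800 * 4096 / 1024 / 1024))       # 8      -> each p2l region, 8.00 MiB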
00:17:12.185 [2024-10-09 07:57:14.046369] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:17:15.477 [2024-10-09 07:57:17.136952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.477 [2024-10-09 07:57:17.137044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:17:15.477 [2024-10-09 07:57:17.137081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3090.645 ms 00:17:15.477 [2024-10-09 07:57:17.137108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.477 [2024-10-09 07:57:17.180900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.477 [2024-10-09 07:57:17.180966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:15.477 [2024-10-09 07:57:17.180988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.376 ms 00:17:15.477 [2024-10-09 07:57:17.181004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.477 [2024-10-09 07:57:17.181206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.477 [2024-10-09 07:57:17.181232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:15.477 [2024-10-09 07:57:17.181247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:17:15.477 [2024-10-09 07:57:17.181264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.477 [2024-10-09 07:57:17.222261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.477 [2024-10-09 07:57:17.222323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:15.477 [2024-10-09 07:57:17.222356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.883 ms 00:17:15.477 [2024-10-09 07:57:17.222375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.477 [2024-10-09 07:57:17.222440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.477 [2024-10-09 07:57:17.222460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:15.477 [2024-10-09 07:57:17.222476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:15.477 [2024-10-09 07:57:17.222490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.477 [2024-10-09 07:57:17.222867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.477 [2024-10-09 07:57:17.222906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:15.477 [2024-10-09 07:57:17.222923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.289 ms 00:17:15.477 [2024-10-09 07:57:17.222937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.477 [2024-10-09 07:57:17.223134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.477 [2024-10-09 07:57:17.223164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:15.477 [2024-10-09 07:57:17.223179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.160 ms 00:17:15.477 [2024-10-09 07:57:17.223199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.477 [2024-10-09 07:57:17.241242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.477 [2024-10-09 07:57:17.241313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:15.477 [2024-10-09 
07:57:17.241350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.006 ms 00:17:15.477 [2024-10-09 07:57:17.241367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.477 [2024-10-09 07:57:17.254838] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:15.477 [2024-10-09 07:57:17.268667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.477 [2024-10-09 07:57:17.268732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:15.477 [2024-10-09 07:57:17.268755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.145 ms 00:17:15.477 [2024-10-09 07:57:17.268768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.477 [2024-10-09 07:57:17.323662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.477 [2024-10-09 07:57:17.323732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:17:15.477 [2024-10-09 07:57:17.323756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 54.823 ms 00:17:15.477 [2024-10-09 07:57:17.323770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.477 [2024-10-09 07:57:17.324026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.477 [2024-10-09 07:57:17.324055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:15.477 [2024-10-09 07:57:17.324081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.188 ms 00:17:15.477 [2024-10-09 07:57:17.324094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.477 [2024-10-09 07:57:17.355418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.477 [2024-10-09 07:57:17.355474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:17:15.477 [2024-10-09 07:57:17.355496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.224 ms 00:17:15.477 [2024-10-09 07:57:17.355510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.477 [2024-10-09 07:57:17.386275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.477 [2024-10-09 07:57:17.386327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:17:15.477 [2024-10-09 07:57:17.386362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.694 ms 00:17:15.477 [2024-10-09 07:57:17.386375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.477 [2024-10-09 07:57:17.387132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.477 [2024-10-09 07:57:17.387167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:15.477 [2024-10-09 07:57:17.387186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.695 ms 00:17:15.477 [2024-10-09 07:57:17.387198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.477 [2024-10-09 07:57:17.477163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.477 [2024-10-09 07:57:17.477273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:17:15.477 [2024-10-09 07:57:17.477323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 89.846 ms 00:17:15.477 [2024-10-09 07:57:17.477370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.736 [2024-10-09 
07:57:17.525440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.736 [2024-10-09 07:57:17.525523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:17:15.736 [2024-10-09 07:57:17.525549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.846 ms 00:17:15.736 [2024-10-09 07:57:17.525562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.736 [2024-10-09 07:57:17.558287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.737 [2024-10-09 07:57:17.558370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:17:15.737 [2024-10-09 07:57:17.558395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.634 ms 00:17:15.737 [2024-10-09 07:57:17.558408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.737 [2024-10-09 07:57:17.590560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.737 [2024-10-09 07:57:17.590646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:15.737 [2024-10-09 07:57:17.590671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.075 ms 00:17:15.737 [2024-10-09 07:57:17.590684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.737 [2024-10-09 07:57:17.590760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.737 [2024-10-09 07:57:17.590780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:15.737 [2024-10-09 07:57:17.590800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:17:15.737 [2024-10-09 07:57:17.590813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.737 [2024-10-09 07:57:17.590988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.737 [2024-10-09 07:57:17.591020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:15.737 [2024-10-09 07:57:17.591037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:17:15.737 [2024-10-09 07:57:17.591053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.737 [2024-10-09 07:57:17.592292] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3560.701 ms, result 0 00:17:15.737 { 00:17:15.737 "name": "ftl0", 00:17:15.737 "uuid": "f969091b-b94b-4ef6-aa02-bb0012554e58" 00:17:15.737 } 00:17:15.737 07:57:17 ftl.ftl_fio_basic -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:17:15.737 07:57:17 ftl.ftl_fio_basic -- common/autotest_common.sh@899 -- # local bdev_name=ftl0 00:17:15.737 07:57:17 ftl.ftl_fio_basic -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:15.737 07:57:17 ftl.ftl_fio_basic -- common/autotest_common.sh@901 -- # local i 00:17:15.737 07:57:17 ftl.ftl_fio_basic -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:15.737 07:57:17 ftl.ftl_fio_basic -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:15.737 07:57:17 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:15.995 07:57:17 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:17:16.253 [ 00:17:16.253 { 00:17:16.253 "name": "ftl0", 00:17:16.253 "aliases": [ 00:17:16.253 "f969091b-b94b-4ef6-aa02-bb0012554e58" 00:17:16.253 ], 00:17:16.253 "product_name": "FTL 
disk", 00:17:16.253 "block_size": 4096, 00:17:16.253 "num_blocks": 20971520, 00:17:16.253 "uuid": "f969091b-b94b-4ef6-aa02-bb0012554e58", 00:17:16.253 "assigned_rate_limits": { 00:17:16.253 "rw_ios_per_sec": 0, 00:17:16.253 "rw_mbytes_per_sec": 0, 00:17:16.253 "r_mbytes_per_sec": 0, 00:17:16.253 "w_mbytes_per_sec": 0 00:17:16.253 }, 00:17:16.253 "claimed": false, 00:17:16.253 "zoned": false, 00:17:16.253 "supported_io_types": { 00:17:16.253 "read": true, 00:17:16.253 "write": true, 00:17:16.253 "unmap": true, 00:17:16.253 "flush": true, 00:17:16.253 "reset": false, 00:17:16.253 "nvme_admin": false, 00:17:16.253 "nvme_io": false, 00:17:16.253 "nvme_io_md": false, 00:17:16.253 "write_zeroes": true, 00:17:16.253 "zcopy": false, 00:17:16.253 "get_zone_info": false, 00:17:16.253 "zone_management": false, 00:17:16.253 "zone_append": false, 00:17:16.253 "compare": false, 00:17:16.253 "compare_and_write": false, 00:17:16.253 "abort": false, 00:17:16.253 "seek_hole": false, 00:17:16.253 "seek_data": false, 00:17:16.253 "copy": false, 00:17:16.253 "nvme_iov_md": false 00:17:16.253 }, 00:17:16.253 "driver_specific": { 00:17:16.253 "ftl": { 00:17:16.253 "base_bdev": "9a153477-45b1-4d8c-b5a9-9eb781c785a6", 00:17:16.253 "cache": "nvc0n1p0" 00:17:16.253 } 00:17:16.253 } 00:17:16.253 } 00:17:16.253 ] 00:17:16.253 07:57:18 ftl.ftl_fio_basic -- common/autotest_common.sh@907 -- # return 0 00:17:16.253 07:57:18 ftl.ftl_fio_basic -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:17:16.253 07:57:18 ftl.ftl_fio_basic -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:17:16.512 07:57:18 ftl.ftl_fio_basic -- ftl/fio.sh@70 -- # echo ']}' 00:17:16.512 07:57:18 ftl.ftl_fio_basic -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:17:16.771 [2024-10-09 07:57:18.749696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.771 [2024-10-09 07:57:18.749770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:16.771 [2024-10-09 07:57:18.749792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:17:16.771 [2024-10-09 07:57:18.749807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.771 [2024-10-09 07:57:18.749861] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:16.771 [2024-10-09 07:57:18.753284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.771 [2024-10-09 07:57:18.753324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:16.771 [2024-10-09 07:57:18.753359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.391 ms 00:17:16.771 [2024-10-09 07:57:18.753376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.771 [2024-10-09 07:57:18.753896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.771 [2024-10-09 07:57:18.753933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:16.771 [2024-10-09 07:57:18.753952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.470 ms 00:17:16.771 [2024-10-09 07:57:18.753965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.771 [2024-10-09 07:57:18.757270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.771 [2024-10-09 07:57:18.757304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:16.771 
[2024-10-09 07:57:18.757322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.271 ms 00:17:16.771 [2024-10-09 07:57:18.757347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.771 [2024-10-09 07:57:18.764055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.771 [2024-10-09 07:57:18.764094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:16.771 [2024-10-09 07:57:18.764111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.666 ms 00:17:16.771 [2024-10-09 07:57:18.764124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.031 [2024-10-09 07:57:18.795458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.031 [2024-10-09 07:57:18.795518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:17.031 [2024-10-09 07:57:18.795541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.224 ms 00:17:17.031 [2024-10-09 07:57:18.795564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.032 [2024-10-09 07:57:18.814340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.032 [2024-10-09 07:57:18.814404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:17.032 [2024-10-09 07:57:18.814428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.694 ms 00:17:17.032 [2024-10-09 07:57:18.814441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.032 [2024-10-09 07:57:18.814726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.032 [2024-10-09 07:57:18.814774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:17.032 [2024-10-09 07:57:18.814793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.209 ms 00:17:17.032 [2024-10-09 07:57:18.814809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.032 [2024-10-09 07:57:18.846707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.032 [2024-10-09 07:57:18.846779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:17.032 [2024-10-09 07:57:18.846803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.852 ms 00:17:17.032 [2024-10-09 07:57:18.846816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.032 [2024-10-09 07:57:18.886154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.032 [2024-10-09 07:57:18.886251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:17.032 [2024-10-09 07:57:18.886290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.249 ms 00:17:17.032 [2024-10-09 07:57:18.886313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.032 [2024-10-09 07:57:18.920358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.032 [2024-10-09 07:57:18.920423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:17.032 [2024-10-09 07:57:18.920447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.928 ms 00:17:17.032 [2024-10-09 07:57:18.920460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.032 [2024-10-09 07:57:18.952531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.032 [2024-10-09 07:57:18.952598] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:17.032 [2024-10-09 07:57:18.952621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.898 ms 00:17:17.032 [2024-10-09 07:57:18.952635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.032 [2024-10-09 07:57:18.952715] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:17.032 [2024-10-09 07:57:18.952745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:17.032 [2024-10-09 07:57:18.952763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:17.032 [2024-10-09 07:57:18.952776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:17.032 [2024-10-09 07:57:18.952792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:17.032 [2024-10-09 07:57:18.952805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:17.032 [2024-10-09 07:57:18.952819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:17.032 [2024-10-09 07:57:18.952831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:17.032 [2024-10-09 07:57:18.952849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:17.032 [2024-10-09 07:57:18.952862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:17.032 [2024-10-09 07:57:18.952876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:17.032 [2024-10-09 07:57:18.952889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:17.032 [2024-10-09 07:57:18.952903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:17.032 [2024-10-09 07:57:18.952916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:17.032 [2024-10-09 07:57:18.952930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:17.032 [2024-10-09 07:57:18.952943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:17.032 [2024-10-09 07:57:18.952957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:17.032 [2024-10-09 07:57:18.952971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:17.032 [2024-10-09 07:57:18.952985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:17.032 [2024-10-09 07:57:18.952998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:17.032 [2024-10-09 07:57:18.953015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:17.032 [2024-10-09 07:57:18.953028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:17.032 [2024-10-09 07:57:18.953042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:17.032 
[2024-10-09 07:57:18.953054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:17.032 [2024-10-09 07:57:18.953071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:17.032 [2024-10-09 07:57:18.953084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:17.032 [2024-10-09 07:57:18.953099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:17.032 [2024-10-09 07:57:18.953111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:17.032 [2024-10-09 07:57:18.953125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:17.032 [2024-10-09 07:57:18.953138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:17.032 [2024-10-09 07:57:18.953158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:17.032 [2024-10-09 07:57:18.953171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:17.032 [2024-10-09 07:57:18.953187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:17.032 [2024-10-09 07:57:18.953200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:17.032 [2024-10-09 07:57:18.953214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:17.032 [2024-10-09 07:57:18.953227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:17.032 [2024-10-09 07:57:18.953241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:17.032 [2024-10-09 07:57:18.953253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:17.032 [2024-10-09 07:57:18.953267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:17.032 [2024-10-09 07:57:18.953280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:17.032 [2024-10-09 07:57:18.953296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:17.032 [2024-10-09 07:57:18.953309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:17.032 [2024-10-09 07:57:18.953323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:17.032 [2024-10-09 07:57:18.953352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:17.032 [2024-10-09 07:57:18.953368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:17.032 [2024-10-09 07:57:18.953381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:17.032 [2024-10-09 07:57:18.953398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:17.032 [2024-10-09 07:57:18.953410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 
state: free 00:17:17.032 [2024-10-09 07:57:18.953425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:17.032 [2024-10-09 07:57:18.953437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:17.032 [2024-10-09 07:57:18.953451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:17.032 [2024-10-09 07:57:18.953464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:17.032 [2024-10-09 07:57:18.953478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:17.032 [2024-10-09 07:57:18.953491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:17.032 [2024-10-09 07:57:18.953505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:17.032 [2024-10-09 07:57:18.953517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:17.032 [2024-10-09 07:57:18.953534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:17.032 [2024-10-09 07:57:18.953547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:17.032 [2024-10-09 07:57:18.953561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:17.032 [2024-10-09 07:57:18.953574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:17.032 [2024-10-09 07:57:18.953588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:17.032 [2024-10-09 07:57:18.953600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:17.032 [2024-10-09 07:57:18.953615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:17.032 [2024-10-09 07:57:18.953628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:17.032 [2024-10-09 07:57:18.953643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:17.032 [2024-10-09 07:57:18.953656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:17.032 [2024-10-09 07:57:18.953670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:17.033 [2024-10-09 07:57:18.953683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:17.033 [2024-10-09 07:57:18.953697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:17.033 [2024-10-09 07:57:18.953709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:17.033 [2024-10-09 07:57:18.953723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:17.033 [2024-10-09 07:57:18.953736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:17.033 [2024-10-09 07:57:18.953754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 
0 / 261120 wr_cnt: 0 state: free 00:17:17.033 [2024-10-09 07:57:18.953767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:17.033 [2024-10-09 07:57:18.953781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:17.033 [2024-10-09 07:57:18.953794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:17.033 [2024-10-09 07:57:18.953808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:17.033 [2024-10-09 07:57:18.953821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:17.033 [2024-10-09 07:57:18.953835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:17.033 [2024-10-09 07:57:18.953848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:17.033 [2024-10-09 07:57:18.953862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:17.033 [2024-10-09 07:57:18.953875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:17.033 [2024-10-09 07:57:18.953889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:17.033 [2024-10-09 07:57:18.953902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:17.033 [2024-10-09 07:57:18.953916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:17.033 [2024-10-09 07:57:18.953928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:17.033 [2024-10-09 07:57:18.953942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:17.033 [2024-10-09 07:57:18.953955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:17.033 [2024-10-09 07:57:18.953996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:17.033 [2024-10-09 07:57:18.954009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:17.033 [2024-10-09 07:57:18.954023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:17.033 [2024-10-09 07:57:18.954035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:17.033 [2024-10-09 07:57:18.954049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:17.033 [2024-10-09 07:57:18.954062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:17.033 [2024-10-09 07:57:18.954077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:17.033 [2024-10-09 07:57:18.954090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:17.033 [2024-10-09 07:57:18.954104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:17.033 [2024-10-09 07:57:18.954117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:17.033 [2024-10-09 07:57:18.954134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:17.033 [2024-10-09 07:57:18.954146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:17.033 [2024-10-09 07:57:18.954161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:17.033 [2024-10-09 07:57:18.954183] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:17.033 [2024-10-09 07:57:18.954197] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: f969091b-b94b-4ef6-aa02-bb0012554e58 00:17:17.033 [2024-10-09 07:57:18.954210] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:17.033 [2024-10-09 07:57:18.954226] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:17.033 [2024-10-09 07:57:18.954238] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:17.033 [2024-10-09 07:57:18.954252] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:17.033 [2024-10-09 07:57:18.954263] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:17.033 [2024-10-09 07:57:18.954278] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:17.033 [2024-10-09 07:57:18.954290] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:17.033 [2024-10-09 07:57:18.954303] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:17.033 [2024-10-09 07:57:18.954313] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:17.033 [2024-10-09 07:57:18.954327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.033 [2024-10-09 07:57:18.954354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:17.033 [2024-10-09 07:57:18.954374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.626 ms 00:17:17.033 [2024-10-09 07:57:18.954387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.033 [2024-10-09 07:57:18.971274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.033 [2024-10-09 07:57:18.971323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:17.033 [2024-10-09 07:57:18.971364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.799 ms 00:17:17.033 [2024-10-09 07:57:18.971378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.033 [2024-10-09 07:57:18.971849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.033 [2024-10-09 07:57:18.971883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:17.033 [2024-10-09 07:57:18.971902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.423 ms 00:17:17.033 [2024-10-09 07:57:18.971914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.033 [2024-10-09 07:57:19.030278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:17.033 [2024-10-09 07:57:19.030357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:17.033 [2024-10-09 07:57:19.030380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:17.033 [2024-10-09 07:57:19.030393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
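The statistics dump above reports total writes: 960 against user writes: 0, which is why WAF is printed as inf: write amplification is conventionally the ratio of media writes to user writes, and a freshly created device that has only written metadata has no user writes to divide by. A hedged one-liner reproducing that arithmetic (the 960/0 figures are taken from the dump; the ratio definition is the conventional one, not necessarily ftl_debug.c's exact formula):

  awk 'BEGIN { total = 960; user = 0; print "WAF:", (user > 0 ? total / user : "inf") }'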
00:17:17.033 [2024-10-09 07:57:19.030488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:17.033 [2024-10-09 07:57:19.030508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:17.033 [2024-10-09 07:57:19.030523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:17.033 [2024-10-09 07:57:19.030535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.033 [2024-10-09 07:57:19.030711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:17.033 [2024-10-09 07:57:19.030743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:17.033 [2024-10-09 07:57:19.030760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:17.033 [2024-10-09 07:57:19.030772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.033 [2024-10-09 07:57:19.030810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:17.033 [2024-10-09 07:57:19.030830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:17.033 [2024-10-09 07:57:19.030849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:17.033 [2024-10-09 07:57:19.030861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.292 [2024-10-09 07:57:19.141267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:17.292 [2024-10-09 07:57:19.141368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:17.292 [2024-10-09 07:57:19.141394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:17.292 [2024-10-09 07:57:19.141408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.292 [2024-10-09 07:57:19.227066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:17.292 [2024-10-09 07:57:19.227137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:17.292 [2024-10-09 07:57:19.227160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:17.292 [2024-10-09 07:57:19.227173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.292 [2024-10-09 07:57:19.227316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:17.292 [2024-10-09 07:57:19.227360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:17.292 [2024-10-09 07:57:19.227378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:17.292 [2024-10-09 07:57:19.227391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.292 [2024-10-09 07:57:19.227485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:17.292 [2024-10-09 07:57:19.227504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:17.292 [2024-10-09 07:57:19.227520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:17.293 [2024-10-09 07:57:19.227535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.293 [2024-10-09 07:57:19.227705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:17.293 [2024-10-09 07:57:19.227727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:17.293 [2024-10-09 07:57:19.227743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:17.293 [2024-10-09 
07:57:19.227755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.293 [2024-10-09 07:57:19.227838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:17.293 [2024-10-09 07:57:19.227858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:17.293 [2024-10-09 07:57:19.227873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:17.293 [2024-10-09 07:57:19.227885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.293 [2024-10-09 07:57:19.227955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:17.293 [2024-10-09 07:57:19.227971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:17.293 [2024-10-09 07:57:19.227986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:17.293 [2024-10-09 07:57:19.227998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.293 [2024-10-09 07:57:19.228070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:17.293 [2024-10-09 07:57:19.228089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:17.293 [2024-10-09 07:57:19.228103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:17.293 [2024-10-09 07:57:19.228118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.293 [2024-10-09 07:57:19.228309] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 478.604 ms, result 0 00:17:17.293 true 00:17:17.293 07:57:19 ftl.ftl_fio_basic -- ftl/fio.sh@75 -- # killprocess 74679 00:17:17.293 07:57:19 ftl.ftl_fio_basic -- common/autotest_common.sh@950 -- # '[' -z 74679 ']' 00:17:17.293 07:57:19 ftl.ftl_fio_basic -- common/autotest_common.sh@954 -- # kill -0 74679 00:17:17.293 07:57:19 ftl.ftl_fio_basic -- common/autotest_common.sh@955 -- # uname 00:17:17.293 07:57:19 ftl.ftl_fio_basic -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:17.293 07:57:19 ftl.ftl_fio_basic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74679 00:17:17.293 killing process with pid 74679 00:17:17.293 07:57:19 ftl.ftl_fio_basic -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:17.293 07:57:19 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:17.293 07:57:19 ftl.ftl_fio_basic -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74679' 00:17:17.293 07:57:19 ftl.ftl_fio_basic -- common/autotest_common.sh@969 -- # kill 74679 00:17:17.293 07:57:19 ftl.ftl_fio_basic -- common/autotest_common.sh@974 -- # wait 74679 00:17:22.624 07:57:23 ftl.ftl_fio_basic -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:17:22.624 07:57:23 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:17:22.624 07:57:23 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:17:22.624 07:57:23 ftl.ftl_fio_basic -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:22.624 07:57:23 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:17:22.624 07:57:23 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:17:22.624 07:57:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:17:22.624 07:57:23 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:17:22.624 07:57:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:22.624 07:57:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local sanitizers 00:17:22.624 07:57:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:22.624 07:57:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # shift 00:17:22.624 07:57:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local asan_lib= 00:17:22.624 07:57:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:17:22.624 07:57:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:22.624 07:57:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # grep libasan 00:17:22.624 07:57:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:17:22.624 07:57:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:17:22.624 07:57:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:17:22.624 07:57:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # break 00:17:22.624 07:57:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:17:22.624 07:57:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:17:22.624 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:17:22.624 fio-3.35 00:17:22.624 Starting 1 thread 00:17:27.890 00:17:27.890 test: (groupid=0, jobs=1): err= 0: pid=74898: Wed Oct 9 07:57:28 2024 00:17:27.890 read: IOPS=1007, BW=66.9MiB/s (70.2MB/s)(255MiB/3803msec) 00:17:27.890 slat (nsec): min=5702, max=38453, avg=7605.23, stdev=3251.99 00:17:27.890 clat (usec): min=281, max=784, avg=441.84, stdev=56.77 00:17:27.890 lat (usec): min=293, max=797, avg=449.44, stdev=57.31 00:17:27.890 clat percentiles (usec): 00:17:27.890 | 1.00th=[ 351], 5.00th=[ 367], 10.00th=[ 375], 20.00th=[ 383], 00:17:27.890 | 30.00th=[ 404], 40.00th=[ 437], 50.00th=[ 445], 60.00th=[ 449], 00:17:27.890 | 70.00th=[ 457], 80.00th=[ 482], 90.00th=[ 519], 95.00th=[ 545], 00:17:27.890 | 99.00th=[ 603], 99.50th=[ 635], 99.90th=[ 701], 99.95th=[ 725], 00:17:27.890 | 99.99th=[ 783] 00:17:27.890 write: IOPS=1015, BW=67.4MiB/s (70.7MB/s)(256MiB/3799msec); 0 zone resets 00:17:27.890 slat (nsec): min=20270, max=98424, avg=24870.57, stdev=5805.40 00:17:27.890 clat (usec): min=331, max=2185, avg=500.25, stdev=73.51 00:17:27.890 lat (usec): min=368, max=2212, avg=525.12, stdev=73.94 00:17:27.890 clat percentiles (usec): 00:17:27.890 | 1.00th=[ 392], 5.00th=[ 404], 10.00th=[ 412], 20.00th=[ 461], 00:17:27.890 | 30.00th=[ 474], 40.00th=[ 478], 50.00th=[ 486], 60.00th=[ 502], 00:17:27.890 | 70.00th=[ 529], 80.00th=[ 545], 90.00th=[ 578], 95.00th=[ 611], 00:17:27.890 | 99.00th=[ 701], 99.50th=[ 758], 99.90th=[ 963], 99.95th=[ 1598], 00:17:27.890 | 99.99th=[ 2180] 00:17:27.890 bw ( KiB/s): min=66776, max=70312, per=99.84%, avg=68913.14, stdev=1462.06, samples=7 00:17:27.890 iops : min= 982, max= 1034, avg=1013.43, stdev=21.50, samples=7 00:17:27.890 lat (usec) : 500=72.06%, 750=27.65%, 1000=0.25% 00:17:27.890 
lat (msec) : 2=0.03%, 4=0.01% 00:17:27.890 cpu : usr=99.11%, sys=0.11%, ctx=7, majf=0, minf=1169 00:17:27.890 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:27.890 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:27.890 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:27.890 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:27.890 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:27.890 00:17:27.890 Run status group 0 (all jobs): 00:17:27.890 READ: bw=66.9MiB/s (70.2MB/s), 66.9MiB/s-66.9MiB/s (70.2MB/s-70.2MB/s), io=255MiB (267MB), run=3803-3803msec 00:17:27.890 WRITE: bw=67.4MiB/s (70.7MB/s), 67.4MiB/s-67.4MiB/s (70.7MB/s-70.7MB/s), io=256MiB (269MB), run=3799-3799msec 00:17:28.825 ----------------------------------------------------- 00:17:28.825 Suppressions used: 00:17:28.825 count bytes template 00:17:28.825 1 5 /usr/src/fio/parse.c 00:17:28.825 1 8 libtcmalloc_minimal.so 00:17:28.825 1 904 libcrypto.so 00:17:28.825 ----------------------------------------------------- 00:17:28.825 00:17:28.825 07:57:30 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:17:28.825 07:57:30 ftl.ftl_fio_basic -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:28.825 07:57:30 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:17:29.083 07:57:30 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:17:29.083 07:57:30 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:17:29.083 07:57:30 ftl.ftl_fio_basic -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:29.083 07:57:30 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:17:29.083 07:57:30 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:17:29.083 07:57:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:17:29.083 07:57:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:17:29.083 07:57:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:29.083 07:57:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local sanitizers 00:17:29.083 07:57:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:29.083 07:57:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # shift 00:17:29.083 07:57:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local asan_lib= 00:17:29.083 07:57:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:17:29.083 07:57:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:29.083 07:57:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # grep libasan 00:17:29.083 07:57:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:17:29.083 07:57:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:17:29.083 07:57:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:17:29.083 07:57:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # break 00:17:29.083 07:57:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 
-- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:17:29.083 07:57:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:17:29.342 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:17:29.342 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:17:29.342 fio-3.35 00:17:29.342 Starting 2 threads 00:18:08.096 00:18:08.096 first_half: (groupid=0, jobs=1): err= 0: pid=75001: Wed Oct 9 07:58:04 2024 00:18:08.096 read: IOPS=2038, BW=8154KiB/s (8349kB/s)(255MiB/32074msec) 00:18:08.096 slat (nsec): min=4620, max=48599, avg=7814.78, stdev=2343.39 00:18:08.096 clat (usec): min=874, max=429527, avg=47741.95, stdev=27013.59 00:18:08.096 lat (usec): min=884, max=429536, avg=47749.76, stdev=27013.81 00:18:08.096 clat percentiles (msec): 00:18:08.096 | 1.00th=[ 11], 5.00th=[ 40], 10.00th=[ 40], 20.00th=[ 40], 00:18:08.096 | 30.00th=[ 41], 40.00th=[ 41], 50.00th=[ 43], 60.00th=[ 45], 00:18:08.096 | 70.00th=[ 47], 80.00th=[ 50], 90.00th=[ 55], 95.00th=[ 69], 00:18:08.096 | 99.00th=[ 205], 99.50th=[ 230], 99.90th=[ 279], 99.95th=[ 372], 00:18:08.096 | 99.99th=[ 426] 00:18:08.096 write: IOPS=2144, BW=8577KiB/s (8783kB/s)(256MiB/30564msec); 0 zone resets 00:18:08.096 slat (usec): min=5, max=513, avg= 9.96, stdev= 5.94 00:18:08.096 clat (usec): min=507, max=146596, avg=14984.31, stdev=25873.38 00:18:08.096 lat (usec): min=524, max=146607, avg=14994.27, stdev=25873.80 00:18:08.096 clat percentiles (usec): 00:18:08.096 | 1.00th=[ 1057], 5.00th=[ 1385], 10.00th=[ 1614], 20.00th=[ 2147], 00:18:08.096 | 30.00th=[ 3687], 40.00th=[ 5473], 50.00th=[ 6587], 60.00th=[ 7635], 00:18:08.096 | 70.00th=[ 9372], 80.00th=[ 14877], 90.00th=[ 33817], 95.00th=[ 91751], 00:18:08.096 | 99.00th=[117965], 99.50th=[123208], 99.90th=[135267], 99.95th=[139461], 00:18:08.096 | 99.99th=[143655] 00:18:08.096 bw ( KiB/s): min= 120, max=40288, per=95.55%, avg=16391.88, stdev=10665.60, samples=32 00:18:08.096 iops : min= 30, max=10072, avg=4097.94, stdev=2666.39, samples=32 00:18:08.096 lat (usec) : 750=0.04%, 1000=0.31% 00:18:08.096 lat (msec) : 2=8.58%, 4=7.13%, 10=20.31%, 20=8.74%, 50=41.46% 00:18:08.096 lat (msec) : 100=10.29%, 250=3.02%, 500=0.11% 00:18:08.096 cpu : usr=99.03%, sys=0.14%, ctx=50, majf=0, minf=5512 00:18:08.096 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:18:08.096 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:08.096 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:08.096 issued rwts: total=65381,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:08.096 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:08.096 second_half: (groupid=0, jobs=1): err= 0: pid=75002: Wed Oct 9 07:58:04 2024 00:18:08.096 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(255MiB/31893msec) 00:18:08.096 slat (nsec): min=4599, max=46258, avg=7798.55, stdev=2345.74 00:18:08.096 clat (usec): min=817, max=405204, avg=48356.64, stdev=24405.79 00:18:08.096 lat (usec): min=827, max=405214, avg=48364.44, stdev=24405.93 00:18:08.096 clat percentiles (msec): 00:18:08.096 | 1.00th=[ 11], 5.00th=[ 40], 10.00th=[ 40], 20.00th=[ 40], 00:18:08.096 | 30.00th=[ 41], 40.00th=[ 42], 50.00th=[ 43], 60.00th=[ 46], 00:18:08.096 | 70.00th=[ 47], 80.00th=[ 51], 90.00th=[ 56], 95.00th=[ 70], 00:18:08.096 
| 99.00th=[ 188], 99.50th=[ 211], 99.90th=[ 257], 99.95th=[ 317], 00:18:08.096 | 99.99th=[ 363] 00:18:08.096 write: IOPS=2520, BW=9.84MiB/s (10.3MB/s)(256MiB/26005msec); 0 zone resets 00:18:08.096 slat (usec): min=6, max=514, avg=10.01, stdev= 6.33 00:18:08.096 clat (usec): min=491, max=144906, avg=14117.13, stdev=25218.63 00:18:08.096 lat (usec): min=505, max=144915, avg=14127.14, stdev=25218.79 00:18:08.096 clat percentiles (usec): 00:18:08.096 | 1.00th=[ 1090], 5.00th=[ 1418], 10.00th=[ 1647], 20.00th=[ 2008], 00:18:08.096 | 30.00th=[ 2737], 40.00th=[ 4490], 50.00th=[ 6521], 60.00th=[ 8029], 00:18:08.096 | 70.00th=[ 9634], 80.00th=[ 14353], 90.00th=[ 20317], 95.00th=[ 90702], 00:18:08.096 | 99.00th=[115868], 99.50th=[123208], 99.90th=[135267], 99.95th=[139461], 00:18:08.096 | 99.99th=[143655] 00:18:08.096 bw ( KiB/s): min= 896, max=40232, per=100.00%, avg=18727.68, stdev=11974.88, samples=28 00:18:08.096 iops : min= 224, max=10058, avg=4681.89, stdev=2993.75, samples=28 00:18:08.096 lat (usec) : 500=0.01%, 750=0.03%, 1000=0.26% 00:18:08.096 lat (msec) : 2=9.65%, 4=8.77%, 10=17.94%, 20=9.03%, 50=40.17% 00:18:08.096 lat (msec) : 100=11.08%, 250=3.00%, 500=0.06% 00:18:08.096 cpu : usr=99.01%, sys=0.16%, ctx=45, majf=0, minf=5595 00:18:08.096 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:18:08.096 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:08.096 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:08.097 issued rwts: total=65250,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:08.097 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:08.097 00:18:08.097 Run status group 0 (all jobs): 00:18:08.097 READ: bw=15.9MiB/s (16.7MB/s), 8154KiB/s-8184KiB/s (8349kB/s-8380kB/s), io=510MiB (535MB), run=31893-32074msec 00:18:08.097 WRITE: bw=16.8MiB/s (17.6MB/s), 8577KiB/s-9.84MiB/s (8783kB/s-10.3MB/s), io=512MiB (537MB), run=26005-30564msec 00:18:08.097 ----------------------------------------------------- 00:18:08.097 Suppressions used: 00:18:08.097 count bytes template 00:18:08.097 2 10 /usr/src/fio/parse.c 00:18:08.097 2 192 /usr/src/fio/iolog.c 00:18:08.097 1 8 libtcmalloc_minimal.so 00:18:08.097 1 904 libcrypto.so 00:18:08.097 ----------------------------------------------------- 00:18:08.097 00:18:08.097 07:58:06 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:18:08.097 07:58:06 ftl.ftl_fio_basic -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:08.097 07:58:06 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:08.097 07:58:06 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:18:08.097 07:58:06 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:18:08.097 07:58:06 ftl.ftl_fio_basic -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:08.097 07:58:06 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:08.097 07:58:06 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:18:08.097 07:58:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:18:08.097 07:58:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:18:08.097 07:58:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:08.097 
07:58:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local sanitizers 00:18:08.097 07:58:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:08.097 07:58:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # shift 00:18:08.097 07:58:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local asan_lib= 00:18:08.097 07:58:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:18:08.097 07:58:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:08.097 07:58:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:18:08.097 07:58:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # grep libasan 00:18:08.097 07:58:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:18:08.097 07:58:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:18:08.097 07:58:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # break 00:18:08.097 07:58:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:08.097 07:58:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:18:08.097 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:18:08.097 fio-3.35 00:18:08.097 Starting 1 thread 00:18:26.239 00:18:26.239 test: (groupid=0, jobs=1): err= 0: pid=75389: Wed Oct 9 07:58:25 2024 00:18:26.239 read: IOPS=6071, BW=23.7MiB/s (24.9MB/s)(255MiB/10739msec) 00:18:26.239 slat (nsec): min=4561, max=50031, avg=7677.88, stdev=3131.43 00:18:26.239 clat (usec): min=753, max=39884, avg=21069.54, stdev=2328.39 00:18:26.239 lat (usec): min=767, max=39889, avg=21077.22, stdev=2328.01 00:18:26.239 clat percentiles (usec): 00:18:26.239 | 1.00th=[18744], 5.00th=[19006], 10.00th=[19268], 20.00th=[19268], 00:18:26.239 | 30.00th=[19792], 40.00th=[20055], 50.00th=[20317], 60.00th=[20579], 00:18:26.239 | 70.00th=[21365], 80.00th=[22676], 90.00th=[23987], 95.00th=[25560], 00:18:26.239 | 99.00th=[29492], 99.50th=[30278], 99.90th=[34341], 99.95th=[35390], 00:18:26.239 | 99.99th=[39060] 00:18:26.239 write: IOPS=11.0k, BW=42.9MiB/s (44.9MB/s)(256MiB/5974msec); 0 zone resets 00:18:26.239 slat (usec): min=6, max=606, avg= 9.59, stdev= 5.85 00:18:26.239 clat (usec): min=666, max=76601, avg=11602.34, stdev=14613.38 00:18:26.239 lat (usec): min=673, max=76609, avg=11611.93, stdev=14613.36 00:18:26.239 clat percentiles (usec): 00:18:26.239 | 1.00th=[ 988], 5.00th=[ 1221], 10.00th=[ 1352], 20.00th=[ 1549], 00:18:26.239 | 30.00th=[ 1762], 40.00th=[ 2278], 50.00th=[ 7635], 60.00th=[ 8848], 00:18:26.239 | 70.00th=[10159], 80.00th=[11863], 90.00th=[42206], 95.00th=[45351], 00:18:26.239 | 99.00th=[51119], 99.50th=[54264], 99.90th=[57934], 99.95th=[62653], 00:18:26.239 | 99.99th=[73925] 00:18:26.239 bw ( KiB/s): min=35624, max=63480, per=99.57%, avg=43690.67, stdev=8852.00, samples=12 00:18:26.239 iops : min= 8906, max=15870, avg=10922.67, stdev=2213.00, samples=12 00:18:26.239 lat (usec) : 750=0.01%, 1000=0.53% 00:18:26.239 lat (msec) : 2=17.90%, 4=2.45%, 10=13.75%, 20=27.74%, 50=36.88% 00:18:26.239 lat (msec) : 100=0.73% 00:18:26.239 cpu : usr=98.75%, sys=0.31%, ctx=37, majf=0, 
minf=5565 00:18:26.239 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:18:26.239 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:26.239 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:26.239 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:26.239 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:26.239 00:18:26.239 Run status group 0 (all jobs): 00:18:26.239 READ: bw=23.7MiB/s (24.9MB/s), 23.7MiB/s-23.7MiB/s (24.9MB/s-24.9MB/s), io=255MiB (267MB), run=10739-10739msec 00:18:26.240 WRITE: bw=42.9MiB/s (44.9MB/s), 42.9MiB/s-42.9MiB/s (44.9MB/s-44.9MB/s), io=256MiB (268MB), run=5974-5974msec 00:18:26.240 ----------------------------------------------------- 00:18:26.240 Suppressions used: 00:18:26.240 count bytes template 00:18:26.240 1 5 /usr/src/fio/parse.c 00:18:26.240 2 192 /usr/src/fio/iolog.c 00:18:26.240 1 8 libtcmalloc_minimal.so 00:18:26.240 1 904 libcrypto.so 00:18:26.240 ----------------------------------------------------- 00:18:26.240 00:18:26.240 07:58:26 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:18:26.240 07:58:26 ftl.ftl_fio_basic -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:26.240 07:58:26 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:26.240 07:58:26 ftl.ftl_fio_basic -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:26.240 07:58:26 ftl.ftl_fio_basic -- ftl/fio.sh@85 -- # remove_shm 00:18:26.240 Remove shared memory files 00:18:26.240 07:58:26 ftl.ftl_fio_basic -- ftl/common.sh@204 -- # echo Remove shared memory files 00:18:26.240 07:58:26 ftl.ftl_fio_basic -- ftl/common.sh@205 -- # rm -f rm -f 00:18:26.240 07:58:26 ftl.ftl_fio_basic -- ftl/common.sh@206 -- # rm -f rm -f 00:18:26.240 07:58:26 ftl.ftl_fio_basic -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid58283 /dev/shm/spdk_tgt_trace.pid73583 00:18:26.240 07:58:26 ftl.ftl_fio_basic -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:18:26.240 07:58:26 ftl.ftl_fio_basic -- ftl/common.sh@209 -- # rm -f rm -f 00:18:26.240 ************************************ 00:18:26.240 END TEST ftl_fio_basic 00:18:26.240 ************************************ 00:18:26.240 00:18:26.240 real 1m18.164s 00:18:26.240 user 2m56.599s 00:18:26.240 sys 0m3.784s 00:18:26.240 07:58:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:26.240 07:58:26 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:26.240 07:58:26 ftl -- ftl/ftl.sh@74 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:18:26.240 07:58:26 ftl -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:18:26.240 07:58:26 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:26.240 07:58:26 ftl -- common/autotest_common.sh@10 -- # set +x 00:18:26.240 ************************************ 00:18:26.240 START TEST ftl_bdevperf 00:18:26.240 ************************************ 00:18:26.240 07:58:26 ftl.ftl_bdevperf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:18:26.240 * Looking for test storage... 
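The remove_shm step above deletes the /dev/shm/spdk_tgt_trace.pid* files left behind by the two SPDK target processes. A hedged standalone sketch of the same idea, pruning only the trace files whose owning pid is no longer alive, written for illustration rather than copied from the harness:

  for f in /dev/shm/spdk_tgt_trace.pid*; do
    [ -e "$f" ] || continue                     # glob may match nothing
    pid=${f##*.pid}                             # pid is encoded in the filename suffix
    kill -0 "$pid" 2>/dev/null || rm -f "$f"    # remove files from dead processes
  done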
00:18:26.240 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:18:26.240 07:58:27 ftl.ftl_bdevperf -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:18:26.240 07:58:27 ftl.ftl_bdevperf -- common/autotest_common.sh@1681 -- # lcov --version 00:18:26.240 07:58:27 ftl.ftl_bdevperf -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:18:26.240 07:58:27 ftl.ftl_bdevperf -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:18:26.240 07:58:27 ftl.ftl_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:26.240 07:58:27 ftl.ftl_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:26.240 07:58:27 ftl.ftl_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:26.240 07:58:27 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:18:26.240 07:58:27 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:18:26.240 07:58:27 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:18:26.240 07:58:27 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:18:26.240 07:58:27 ftl.ftl_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:18:26.240 07:58:27 ftl.ftl_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:18:26.240 07:58:27 ftl.ftl_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:18:26.240 07:58:27 ftl.ftl_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:26.240 07:58:27 ftl.ftl_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:18:26.240 07:58:27 ftl.ftl_bdevperf -- scripts/common.sh@345 -- # : 1 00:18:26.240 07:58:27 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:26.240 07:58:27 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:26.240 07:58:27 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:18:26.240 07:58:27 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=1 00:18:26.240 07:58:27 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:26.240 07:58:27 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 1 00:18:26.240 07:58:27 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:18:26.240 07:58:27 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:18:26.240 07:58:27 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=2 00:18:26.240 07:58:27 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:26.240 07:58:27 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 2 00:18:26.240 07:58:27 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:18:26.240 07:58:27 ftl.ftl_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:26.240 07:58:27 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:26.240 07:58:27 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # return 0 00:18:26.240 07:58:27 ftl.ftl_bdevperf -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:26.240 07:58:27 ftl.ftl_bdevperf -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:18:26.240 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:26.240 --rc genhtml_branch_coverage=1 00:18:26.240 --rc genhtml_function_coverage=1 00:18:26.240 --rc genhtml_legend=1 00:18:26.240 --rc geninfo_all_blocks=1 00:18:26.240 --rc geninfo_unexecuted_blocks=1 00:18:26.240 00:18:26.240 ' 00:18:26.240 07:58:27 ftl.ftl_bdevperf -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:18:26.240 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:26.240 --rc genhtml_branch_coverage=1 00:18:26.240 
--rc genhtml_function_coverage=1 00:18:26.240 --rc genhtml_legend=1 00:18:26.240 --rc geninfo_all_blocks=1 00:18:26.240 --rc geninfo_unexecuted_blocks=1 00:18:26.240 00:18:26.240 ' 00:18:26.240 07:58:27 ftl.ftl_bdevperf -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:18:26.240 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:26.240 --rc genhtml_branch_coverage=1 00:18:26.240 --rc genhtml_function_coverage=1 00:18:26.240 --rc genhtml_legend=1 00:18:26.240 --rc geninfo_all_blocks=1 00:18:26.240 --rc geninfo_unexecuted_blocks=1 00:18:26.240 00:18:26.240 ' 00:18:26.240 07:58:27 ftl.ftl_bdevperf -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:18:26.240 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:26.240 --rc genhtml_branch_coverage=1 00:18:26.240 --rc genhtml_function_coverage=1 00:18:26.240 --rc genhtml_legend=1 00:18:26.240 --rc geninfo_all_blocks=1 00:18:26.240 --rc geninfo_unexecuted_blocks=1 00:18:26.240 00:18:26.240 ' 00:18:26.240 07:58:27 ftl.ftl_bdevperf -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:18:26.240 07:58:27 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:18:26.240 07:58:27 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:18:26.240 07:58:27 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:18:26.240 07:58:27 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:18:26.240 07:58:27 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:18:26.240 07:58:27 ftl.ftl_bdevperf -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:26.240 07:58:27 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:18:26.240 07:58:27 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:18:26.240 07:58:27 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:26.240 07:58:27 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:26.240 07:58:27 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:18:26.240 07:58:27 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:18:26.240 07:58:27 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:26.240 07:58:27 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:26.240 07:58:27 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:18:26.240 07:58:27 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:18:26.240 07:58:27 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:26.240 07:58:27 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:26.240 07:58:27 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:18:26.240 07:58:27 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:18:26.240 07:58:27 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:26.240 07:58:27 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:26.240 07:58:27 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # export 
spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:26.240 07:58:27 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:26.240 07:58:27 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:18:26.240 07:58:27 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # spdk_ini_pid= 00:18:26.240 07:58:27 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:26.240 07:58:27 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:26.240 07:58:27 ftl.ftl_bdevperf -- ftl/bdevperf.sh@11 -- # device=0000:00:11.0 00:18:26.240 07:58:27 ftl.ftl_bdevperf -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:10.0 00:18:26.240 07:58:27 ftl.ftl_bdevperf -- ftl/bdevperf.sh@13 -- # use_append= 00:18:26.240 07:58:27 ftl.ftl_bdevperf -- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:26.240 07:58:27 ftl.ftl_bdevperf -- ftl/bdevperf.sh@15 -- # timeout=240 00:18:26.240 07:58:27 ftl.ftl_bdevperf -- ftl/bdevperf.sh@18 -- # bdevperf_pid=75662 00:18:26.240 07:58:27 ftl.ftl_bdevperf -- ftl/bdevperf.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:18:26.240 07:58:27 ftl.ftl_bdevperf -- ftl/bdevperf.sh@20 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:18:26.240 07:58:27 ftl.ftl_bdevperf -- ftl/bdevperf.sh@21 -- # waitforlisten 75662 00:18:26.240 07:58:27 ftl.ftl_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 75662 ']' 00:18:26.240 07:58:27 ftl.ftl_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:26.240 07:58:27 ftl.ftl_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:26.240 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:26.240 07:58:27 ftl.ftl_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:26.240 07:58:27 ftl.ftl_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:26.240 07:58:27 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:18:26.240 [2024-10-09 07:58:27.244548] Starting SPDK v25.01-pre git sha1 1c2942c86 / DPDK 24.03.0 initialization... 
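bdevperf above is started with -z, which makes it initialize and then idle until test parameters arrive over RPC (the perform_tests calls further down), with -T naming the bdev under test (ftl0). waitforlisten is the harness-side gate: it polls the target's UNIX-domain RPC socket until a request succeeds. A sketch of that gate under the same paths; the rpc_get_methods probe is an assumption about what the poll issues, since the log only shows the wait banner:

    bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$bdevperf" -z -T ftl0 &
    bdevperf_pid=$!
    trap 'kill "$bdevperf_pid"; exit 1' SIGINT SIGTERM EXIT
    # Bounded poll, mirroring max_retries=100 in the trace above.
    for ((i = 0; i < 100; i++)); do
        "$rpc" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
        sleep 0.5
    done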
00:18:26.240 [2024-10-09 07:58:27.244748] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75662 ] 00:18:26.240 [2024-10-09 07:58:27.416906] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:26.240 [2024-10-09 07:58:27.614511] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:18:26.499 07:58:28 ftl.ftl_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:26.499 07:58:28 ftl.ftl_bdevperf -- common/autotest_common.sh@864 -- # return 0 00:18:26.499 07:58:28 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:18:26.499 07:58:28 ftl.ftl_bdevperf -- ftl/common.sh@54 -- # local name=nvme0 00:18:26.499 07:58:28 ftl.ftl_bdevperf -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:18:26.499 07:58:28 ftl.ftl_bdevperf -- ftl/common.sh@56 -- # local size=103424 00:18:26.499 07:58:28 ftl.ftl_bdevperf -- ftl/common.sh@59 -- # local base_bdev 00:18:26.499 07:58:28 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:18:26.756 07:58:28 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:18:26.756 07:58:28 ftl.ftl_bdevperf -- ftl/common.sh@62 -- # local base_size 00:18:26.756 07:58:28 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:18:26.756 07:58:28 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:18:26.756 07:58:28 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:18:26.756 07:58:28 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bs 00:18:26.756 07:58:28 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:18:26.756 07:58:28 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:18:27.014 07:58:28 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:18:27.014 { 00:18:27.014 "name": "nvme0n1", 00:18:27.014 "aliases": [ 00:18:27.014 "d9f0e198-7cb6-49e2-be93-030a686a18b3" 00:18:27.014 ], 00:18:27.014 "product_name": "NVMe disk", 00:18:27.014 "block_size": 4096, 00:18:27.014 "num_blocks": 1310720, 00:18:27.014 "uuid": "d9f0e198-7cb6-49e2-be93-030a686a18b3", 00:18:27.014 "numa_id": -1, 00:18:27.014 "assigned_rate_limits": { 00:18:27.014 "rw_ios_per_sec": 0, 00:18:27.014 "rw_mbytes_per_sec": 0, 00:18:27.014 "r_mbytes_per_sec": 0, 00:18:27.014 "w_mbytes_per_sec": 0 00:18:27.014 }, 00:18:27.014 "claimed": true, 00:18:27.014 "claim_type": "read_many_write_one", 00:18:27.014 "zoned": false, 00:18:27.014 "supported_io_types": { 00:18:27.014 "read": true, 00:18:27.014 "write": true, 00:18:27.014 "unmap": true, 00:18:27.014 "flush": true, 00:18:27.014 "reset": true, 00:18:27.014 "nvme_admin": true, 00:18:27.014 "nvme_io": true, 00:18:27.014 "nvme_io_md": false, 00:18:27.014 "write_zeroes": true, 00:18:27.014 "zcopy": false, 00:18:27.014 "get_zone_info": false, 00:18:27.014 "zone_management": false, 00:18:27.014 "zone_append": false, 00:18:27.014 "compare": true, 00:18:27.014 "compare_and_write": false, 00:18:27.014 "abort": true, 00:18:27.014 "seek_hole": false, 00:18:27.014 "seek_data": false, 00:18:27.014 "copy": true, 00:18:27.014 "nvme_iov_md": false 00:18:27.014 }, 00:18:27.014 "driver_specific": { 00:18:27.014 
"nvme": [ 00:18:27.014 { 00:18:27.014 "pci_address": "0000:00:11.0", 00:18:27.014 "trid": { 00:18:27.014 "trtype": "PCIe", 00:18:27.015 "traddr": "0000:00:11.0" 00:18:27.015 }, 00:18:27.015 "ctrlr_data": { 00:18:27.015 "cntlid": 0, 00:18:27.015 "vendor_id": "0x1b36", 00:18:27.015 "model_number": "QEMU NVMe Ctrl", 00:18:27.015 "serial_number": "12341", 00:18:27.015 "firmware_revision": "8.0.0", 00:18:27.015 "subnqn": "nqn.2019-08.org.qemu:12341", 00:18:27.015 "oacs": { 00:18:27.015 "security": 0, 00:18:27.015 "format": 1, 00:18:27.015 "firmware": 0, 00:18:27.015 "ns_manage": 1 00:18:27.015 }, 00:18:27.015 "multi_ctrlr": false, 00:18:27.015 "ana_reporting": false 00:18:27.015 }, 00:18:27.015 "vs": { 00:18:27.015 "nvme_version": "1.4" 00:18:27.015 }, 00:18:27.015 "ns_data": { 00:18:27.015 "id": 1, 00:18:27.015 "can_share": false 00:18:27.015 } 00:18:27.015 } 00:18:27.015 ], 00:18:27.015 "mp_policy": "active_passive" 00:18:27.015 } 00:18:27.015 } 00:18:27.015 ]' 00:18:27.015 07:58:28 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:18:27.015 07:58:28 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:18:27.015 07:58:28 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:18:27.272 07:58:29 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=1310720 00:18:27.272 07:58:29 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:18:27.272 07:58:29 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 5120 00:18:27.272 07:58:29 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # base_size=5120 00:18:27.272 07:58:29 ftl.ftl_bdevperf -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:18:27.272 07:58:29 ftl.ftl_bdevperf -- ftl/common.sh@67 -- # clear_lvols 00:18:27.272 07:58:29 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:18:27.272 07:58:29 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:18:27.568 07:58:29 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # stores=b2f69695-470c-4fa2-946c-44fb17b79817 00:18:27.568 07:58:29 ftl.ftl_bdevperf -- ftl/common.sh@29 -- # for lvs in $stores 00:18:27.568 07:58:29 ftl.ftl_bdevperf -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u b2f69695-470c-4fa2-946c-44fb17b79817 00:18:27.826 07:58:29 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:18:28.084 07:58:30 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # lvs=f474699b-05bd-4e8d-8358-152f10c4e4b2 00:18:28.084 07:58:30 ftl.ftl_bdevperf -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u f474699b-05bd-4e8d-8358-152f10c4e4b2 00:18:28.650 07:58:30 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # split_bdev=318a50d8-04c1-40f3-ada4-423a5e4043b1 00:18:28.650 07:58:30 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # create_nv_cache_bdev nvc0 0000:00:10.0 318a50d8-04c1-40f3-ada4-423a5e4043b1 00:18:28.650 07:58:30 ftl.ftl_bdevperf -- ftl/common.sh@35 -- # local name=nvc0 00:18:28.650 07:58:30 ftl.ftl_bdevperf -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:18:28.650 07:58:30 ftl.ftl_bdevperf -- ftl/common.sh@37 -- # local base_bdev=318a50d8-04c1-40f3-ada4-423a5e4043b1 00:18:28.650 07:58:30 ftl.ftl_bdevperf -- ftl/common.sh@38 -- # local cache_size= 00:18:28.650 07:58:30 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # get_bdev_size 318a50d8-04c1-40f3-ada4-423a5e4043b1 00:18:28.650 07:58:30 
ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_name=318a50d8-04c1-40f3-ada4-423a5e4043b1 00:18:28.650 07:58:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:18:28.650 07:58:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bs 00:18:28.650 07:58:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:18:28.650 07:58:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 318a50d8-04c1-40f3-ada4-423a5e4043b1 00:18:28.909 07:58:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:18:28.909 { 00:18:28.909 "name": "318a50d8-04c1-40f3-ada4-423a5e4043b1", 00:18:28.909 "aliases": [ 00:18:28.909 "lvs/nvme0n1p0" 00:18:28.909 ], 00:18:28.909 "product_name": "Logical Volume", 00:18:28.909 "block_size": 4096, 00:18:28.909 "num_blocks": 26476544, 00:18:28.909 "uuid": "318a50d8-04c1-40f3-ada4-423a5e4043b1", 00:18:28.909 "assigned_rate_limits": { 00:18:28.909 "rw_ios_per_sec": 0, 00:18:28.909 "rw_mbytes_per_sec": 0, 00:18:28.909 "r_mbytes_per_sec": 0, 00:18:28.909 "w_mbytes_per_sec": 0 00:18:28.909 }, 00:18:28.909 "claimed": false, 00:18:28.909 "zoned": false, 00:18:28.909 "supported_io_types": { 00:18:28.909 "read": true, 00:18:28.909 "write": true, 00:18:28.909 "unmap": true, 00:18:28.909 "flush": false, 00:18:28.909 "reset": true, 00:18:28.909 "nvme_admin": false, 00:18:28.909 "nvme_io": false, 00:18:28.909 "nvme_io_md": false, 00:18:28.909 "write_zeroes": true, 00:18:28.909 "zcopy": false, 00:18:28.909 "get_zone_info": false, 00:18:28.909 "zone_management": false, 00:18:28.909 "zone_append": false, 00:18:28.909 "compare": false, 00:18:28.909 "compare_and_write": false, 00:18:28.909 "abort": false, 00:18:28.909 "seek_hole": true, 00:18:28.909 "seek_data": true, 00:18:28.909 "copy": false, 00:18:28.909 "nvme_iov_md": false 00:18:28.909 }, 00:18:28.909 "driver_specific": { 00:18:28.909 "lvol": { 00:18:28.909 "lvol_store_uuid": "f474699b-05bd-4e8d-8358-152f10c4e4b2", 00:18:28.909 "base_bdev": "nvme0n1", 00:18:28.909 "thin_provision": true, 00:18:28.909 "num_allocated_clusters": 0, 00:18:28.909 "snapshot": false, 00:18:28.909 "clone": false, 00:18:28.909 "esnap_clone": false 00:18:28.909 } 00:18:28.909 } 00:18:28.909 } 00:18:28.909 ]' 00:18:28.909 07:58:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:18:28.909 07:58:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:18:28.909 07:58:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:18:28.909 07:58:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=26476544 00:18:28.909 07:58:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:18:28.909 07:58:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 103424 00:18:28.909 07:58:30 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # local base_size=5171 00:18:28.909 07:58:30 ftl.ftl_bdevperf -- ftl/common.sh@44 -- # local nvc_bdev 00:18:28.909 07:58:30 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:18:29.167 07:58:31 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:18:29.167 07:58:31 ftl.ftl_bdevperf -- ftl/common.sh@47 -- # [[ -z '' ]] 00:18:29.167 07:58:31 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # get_bdev_size 318a50d8-04c1-40f3-ada4-423a5e4043b1 00:18:29.167 07:58:31 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1378 -- # local bdev_name=318a50d8-04c1-40f3-ada4-423a5e4043b1 00:18:29.167 07:58:31 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:18:29.167 07:58:31 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bs 00:18:29.167 07:58:31 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:18:29.167 07:58:31 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 318a50d8-04c1-40f3-ada4-423a5e4043b1 00:18:29.426 07:58:31 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:18:29.426 { 00:18:29.426 "name": "318a50d8-04c1-40f3-ada4-423a5e4043b1", 00:18:29.426 "aliases": [ 00:18:29.426 "lvs/nvme0n1p0" 00:18:29.426 ], 00:18:29.426 "product_name": "Logical Volume", 00:18:29.426 "block_size": 4096, 00:18:29.426 "num_blocks": 26476544, 00:18:29.426 "uuid": "318a50d8-04c1-40f3-ada4-423a5e4043b1", 00:18:29.426 "assigned_rate_limits": { 00:18:29.426 "rw_ios_per_sec": 0, 00:18:29.426 "rw_mbytes_per_sec": 0, 00:18:29.426 "r_mbytes_per_sec": 0, 00:18:29.426 "w_mbytes_per_sec": 0 00:18:29.426 }, 00:18:29.426 "claimed": false, 00:18:29.426 "zoned": false, 00:18:29.426 "supported_io_types": { 00:18:29.426 "read": true, 00:18:29.426 "write": true, 00:18:29.426 "unmap": true, 00:18:29.426 "flush": false, 00:18:29.426 "reset": true, 00:18:29.426 "nvme_admin": false, 00:18:29.426 "nvme_io": false, 00:18:29.426 "nvme_io_md": false, 00:18:29.426 "write_zeroes": true, 00:18:29.426 "zcopy": false, 00:18:29.426 "get_zone_info": false, 00:18:29.426 "zone_management": false, 00:18:29.426 "zone_append": false, 00:18:29.426 "compare": false, 00:18:29.426 "compare_and_write": false, 00:18:29.426 "abort": false, 00:18:29.426 "seek_hole": true, 00:18:29.426 "seek_data": true, 00:18:29.426 "copy": false, 00:18:29.426 "nvme_iov_md": false 00:18:29.426 }, 00:18:29.426 "driver_specific": { 00:18:29.426 "lvol": { 00:18:29.426 "lvol_store_uuid": "f474699b-05bd-4e8d-8358-152f10c4e4b2", 00:18:29.426 "base_bdev": "nvme0n1", 00:18:29.426 "thin_provision": true, 00:18:29.426 "num_allocated_clusters": 0, 00:18:29.426 "snapshot": false, 00:18:29.426 "clone": false, 00:18:29.426 "esnap_clone": false 00:18:29.426 } 00:18:29.426 } 00:18:29.426 } 00:18:29.426 ]' 00:18:29.426 07:58:31 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:18:29.684 07:58:31 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:18:29.684 07:58:31 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:18:29.684 07:58:31 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=26476544 00:18:29.684 07:58:31 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:18:29.684 07:58:31 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 103424 00:18:29.684 07:58:31 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # cache_size=5171 00:18:29.684 07:58:31 ftl.ftl_bdevperf -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:18:29.943 07:58:31 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # nv_cache=nvc0n1p0 00:18:29.943 07:58:31 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # get_bdev_size 318a50d8-04c1-40f3-ada4-423a5e4043b1 00:18:29.943 07:58:31 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_name=318a50d8-04c1-40f3-ada4-423a5e4043b1 00:18:29.943 07:58:31 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:18:29.943 07:58:31 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1380 -- # local bs 00:18:29.943 07:58:31 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:18:29.943 07:58:31 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 318a50d8-04c1-40f3-ada4-423a5e4043b1 00:18:30.509 07:58:32 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:18:30.509 { 00:18:30.509 "name": "318a50d8-04c1-40f3-ada4-423a5e4043b1", 00:18:30.509 "aliases": [ 00:18:30.509 "lvs/nvme0n1p0" 00:18:30.509 ], 00:18:30.509 "product_name": "Logical Volume", 00:18:30.509 "block_size": 4096, 00:18:30.509 "num_blocks": 26476544, 00:18:30.509 "uuid": "318a50d8-04c1-40f3-ada4-423a5e4043b1", 00:18:30.509 "assigned_rate_limits": { 00:18:30.509 "rw_ios_per_sec": 0, 00:18:30.509 "rw_mbytes_per_sec": 0, 00:18:30.509 "r_mbytes_per_sec": 0, 00:18:30.509 "w_mbytes_per_sec": 0 00:18:30.509 }, 00:18:30.509 "claimed": false, 00:18:30.509 "zoned": false, 00:18:30.509 "supported_io_types": { 00:18:30.509 "read": true, 00:18:30.509 "write": true, 00:18:30.509 "unmap": true, 00:18:30.509 "flush": false, 00:18:30.509 "reset": true, 00:18:30.509 "nvme_admin": false, 00:18:30.509 "nvme_io": false, 00:18:30.509 "nvme_io_md": false, 00:18:30.509 "write_zeroes": true, 00:18:30.509 "zcopy": false, 00:18:30.509 "get_zone_info": false, 00:18:30.509 "zone_management": false, 00:18:30.509 "zone_append": false, 00:18:30.509 "compare": false, 00:18:30.509 "compare_and_write": false, 00:18:30.509 "abort": false, 00:18:30.509 "seek_hole": true, 00:18:30.509 "seek_data": true, 00:18:30.509 "copy": false, 00:18:30.509 "nvme_iov_md": false 00:18:30.509 }, 00:18:30.509 "driver_specific": { 00:18:30.509 "lvol": { 00:18:30.509 "lvol_store_uuid": "f474699b-05bd-4e8d-8358-152f10c4e4b2", 00:18:30.509 "base_bdev": "nvme0n1", 00:18:30.509 "thin_provision": true, 00:18:30.509 "num_allocated_clusters": 0, 00:18:30.509 "snapshot": false, 00:18:30.509 "clone": false, 00:18:30.509 "esnap_clone": false 00:18:30.509 } 00:18:30.509 } 00:18:30.509 } 00:18:30.509 ]' 00:18:30.509 07:58:32 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:18:30.509 07:58:32 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:18:30.509 07:58:32 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:18:30.509 07:58:32 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=26476544 00:18:30.509 07:58:32 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:18:30.509 07:58:32 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 103424 00:18:30.509 07:58:32 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # l2p_dram_size_mb=20 00:18:30.509 07:58:32 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 318a50d8-04c1-40f3-ada4-423a5e4043b1 -c nvc0n1p0 --l2p_dram_limit 20 00:18:30.768 [2024-10-09 07:58:32.636058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:30.768 [2024-10-09 07:58:32.636145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:30.768 [2024-10-09 07:58:32.636167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:18:30.768 [2024-10-09 07:58:32.636183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.768 [2024-10-09 07:58:32.636268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:30.768 [2024-10-09 07:58:32.636291] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:30.768 [2024-10-09 07:58:32.636305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:18:30.768 [2024-10-09 07:58:32.636320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.768 [2024-10-09 07:58:32.636378] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:30.768 [2024-10-09 07:58:32.637395] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:30.768 [2024-10-09 07:58:32.637433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:30.768 [2024-10-09 07:58:32.637462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:30.768 [2024-10-09 07:58:32.637476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.064 ms 00:18:30.768 [2024-10-09 07:58:32.637490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.768 [2024-10-09 07:58:32.637624] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 692bc142-5dbb-4114-b658-d7861726dde1 00:18:30.768 [2024-10-09 07:58:32.638740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:30.768 [2024-10-09 07:58:32.638783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:18:30.768 [2024-10-09 07:58:32.638807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:18:30.768 [2024-10-09 07:58:32.638821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.768 [2024-10-09 07:58:32.643980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:30.768 [2024-10-09 07:58:32.644046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:30.768 [2024-10-09 07:58:32.644068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.103 ms 00:18:30.768 [2024-10-09 07:58:32.644081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.768 [2024-10-09 07:58:32.644233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:30.768 [2024-10-09 07:58:32.644256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:30.768 [2024-10-09 07:58:32.644279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.098 ms 00:18:30.768 [2024-10-09 07:58:32.644292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.768 [2024-10-09 07:58:32.644416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:30.768 [2024-10-09 07:58:32.644437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:30.768 [2024-10-09 07:58:32.644457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:18:30.768 [2024-10-09 07:58:32.644469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.768 [2024-10-09 07:58:32.644504] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:30.768 [2024-10-09 07:58:32.649384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:30.768 [2024-10-09 07:58:32.649473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:30.768 [2024-10-09 07:58:32.649491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.892 ms 00:18:30.768 [2024-10-09 07:58:32.649506] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.768 [2024-10-09 07:58:32.649566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:30.768 [2024-10-09 07:58:32.649585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:30.768 [2024-10-09 07:58:32.649599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:18:30.768 [2024-10-09 07:58:32.649613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.768 [2024-10-09 07:58:32.649670] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:18:30.768 [2024-10-09 07:58:32.649839] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:30.768 [2024-10-09 07:58:32.649860] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:30.768 [2024-10-09 07:58:32.649878] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:18:30.768 [2024-10-09 07:58:32.649895] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:30.768 [2024-10-09 07:58:32.649912] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:30.768 [2024-10-09 07:58:32.649925] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:18:30.768 [2024-10-09 07:58:32.649942] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:30.768 [2024-10-09 07:58:32.649954] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:30.768 [2024-10-09 07:58:32.649968] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:30.768 [2024-10-09 07:58:32.649981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:30.768 [2024-10-09 07:58:32.649995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:30.768 [2024-10-09 07:58:32.650009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.314 ms 00:18:30.768 [2024-10-09 07:58:32.650023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.768 [2024-10-09 07:58:32.650117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:30.768 [2024-10-09 07:58:32.650137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:30.768 [2024-10-09 07:58:32.650151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:18:30.768 [2024-10-09 07:58:32.650167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.768 [2024-10-09 07:58:32.650274] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:30.768 [2024-10-09 07:58:32.650306] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:30.768 [2024-10-09 07:58:32.650321] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:30.768 [2024-10-09 07:58:32.650352] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:30.768 [2024-10-09 07:58:32.650368] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:30.768 [2024-10-09 07:58:32.650392] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:30.768 [2024-10-09 07:58:32.650404] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:18:30.768 
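The 80.00 MiB l2p region printed just above follows directly from the header a few records earlier: the map holds one 4-byte address per 4 KiB logical block, so

    20971520 entries x 4 B/entry = 83886080 B = 80.00 MiB      (the dumped l2p region)
    --l2p_dram_limit 20          -> at most ~20 MiB of that table resident in DRAM

meaning the 80 MiB map is cached under a 20 MiB DRAM budget; the startup notice further down, "l2p maximum resident size is: 19 (of 20) MiB", is that budget less the cache's own bookkeeping (the exact deduction is an inference from the message, not stated in the log).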
[2024-10-09 07:58:32.650418] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:30.768 [2024-10-09 07:58:32.650430] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:18:30.768 [2024-10-09 07:58:32.650444] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:30.768 [2024-10-09 07:58:32.650456] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:30.768 [2024-10-09 07:58:32.650486] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:18:30.768 [2024-10-09 07:58:32.650498] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:30.768 [2024-10-09 07:58:32.650513] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:30.768 [2024-10-09 07:58:32.650525] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:18:30.768 [2024-10-09 07:58:32.650541] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:30.768 [2024-10-09 07:58:32.650553] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:30.768 [2024-10-09 07:58:32.650569] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:18:30.768 [2024-10-09 07:58:32.650580] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:30.768 [2024-10-09 07:58:32.650597] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:30.768 [2024-10-09 07:58:32.650609] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:18:30.768 [2024-10-09 07:58:32.650623] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:30.768 [2024-10-09 07:58:32.650635] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:30.768 [2024-10-09 07:58:32.650649] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:18:30.768 [2024-10-09 07:58:32.650661] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:30.769 [2024-10-09 07:58:32.650674] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:30.769 [2024-10-09 07:58:32.650686] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:18:30.769 [2024-10-09 07:58:32.650700] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:30.769 [2024-10-09 07:58:32.650712] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:30.769 [2024-10-09 07:58:32.650726] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:18:30.769 [2024-10-09 07:58:32.650737] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:30.769 [2024-10-09 07:58:32.650753] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:30.769 [2024-10-09 07:58:32.650765] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:18:30.769 [2024-10-09 07:58:32.650778] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:30.769 [2024-10-09 07:58:32.650791] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:30.769 [2024-10-09 07:58:32.650804] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:18:30.769 [2024-10-09 07:58:32.650816] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:30.769 [2024-10-09 07:58:32.650829] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:30.769 [2024-10-09 07:58:32.650841] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 
offset: 113.62 MiB 00:18:30.769 [2024-10-09 07:58:32.650855] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:30.769 [2024-10-09 07:58:32.650866] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:30.769 [2024-10-09 07:58:32.650880] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:18:30.769 [2024-10-09 07:58:32.650891] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:30.769 [2024-10-09 07:58:32.650905] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:30.769 [2024-10-09 07:58:32.650918] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:30.769 [2024-10-09 07:58:32.650939] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:30.769 [2024-10-09 07:58:32.650952] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:30.769 [2024-10-09 07:58:32.650971] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:30.769 [2024-10-09 07:58:32.650984] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:30.769 [2024-10-09 07:58:32.651008] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:30.769 [2024-10-09 07:58:32.651020] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:30.769 [2024-10-09 07:58:32.651034] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:30.769 [2024-10-09 07:58:32.651046] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:30.769 [2024-10-09 07:58:32.651064] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:30.769 [2024-10-09 07:58:32.651082] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:30.769 [2024-10-09 07:58:32.651097] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:18:30.769 [2024-10-09 07:58:32.651110] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:18:30.769 [2024-10-09 07:58:32.651123] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:18:30.769 [2024-10-09 07:58:32.651135] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:18:30.769 [2024-10-09 07:58:32.651149] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:18:30.769 [2024-10-09 07:58:32.651161] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:18:30.769 [2024-10-09 07:58:32.651175] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:18:30.769 [2024-10-09 07:58:32.651187] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:18:30.769 [2024-10-09 07:58:32.651203] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:18:30.769 [2024-10-09 07:58:32.651215] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:18:30.769 [2024-10-09 07:58:32.651229] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:18:30.769 [2024-10-09 07:58:32.651241] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:18:30.769 [2024-10-09 07:58:32.651254] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:18:30.769 [2024-10-09 07:58:32.651267] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:18:30.769 [2024-10-09 07:58:32.651283] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:30.769 [2024-10-09 07:58:32.651297] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:30.769 [2024-10-09 07:58:32.651312] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:30.769 [2024-10-09 07:58:32.651325] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:30.769 [2024-10-09 07:58:32.651363] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:30.769 [2024-10-09 07:58:32.651377] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:30.769 [2024-10-09 07:58:32.651392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:30.769 [2024-10-09 07:58:32.651405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:30.769 [2024-10-09 07:58:32.651422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.185 ms 00:18:30.769 [2024-10-09 07:58:32.651435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.769 [2024-10-09 07:58:32.651487] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
00:18:30.769 [2024-10-09 07:58:32.651510] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:18:33.299 [2024-10-09 07:58:34.756234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.299 [2024-10-09 07:58:34.756356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:18:33.299 [2024-10-09 07:58:34.756403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2104.750 ms 00:18:33.299 [2024-10-09 07:58:34.756430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.299 [2024-10-09 07:58:34.801493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.299 [2024-10-09 07:58:34.801572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:33.299 [2024-10-09 07:58:34.801598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.661 ms 00:18:33.299 [2024-10-09 07:58:34.801613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.299 [2024-10-09 07:58:34.801816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.299 [2024-10-09 07:58:34.801838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:33.299 [2024-10-09 07:58:34.801859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:18:33.299 [2024-10-09 07:58:34.801875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.299 [2024-10-09 07:58:34.842793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.299 [2024-10-09 07:58:34.842867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:33.299 [2024-10-09 07:58:34.842891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.855 ms 00:18:33.299 [2024-10-09 07:58:34.842911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.299 [2024-10-09 07:58:34.842981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.299 [2024-10-09 07:58:34.842998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:33.299 [2024-10-09 07:58:34.843015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:18:33.299 [2024-10-09 07:58:34.843028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.299 [2024-10-09 07:58:34.843473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.299 [2024-10-09 07:58:34.843507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:33.299 [2024-10-09 07:58:34.843525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.331 ms 00:18:33.299 [2024-10-09 07:58:34.843538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.299 [2024-10-09 07:58:34.843705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.299 [2024-10-09 07:58:34.843729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:33.299 [2024-10-09 07:58:34.843748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.136 ms 00:18:33.299 [2024-10-09 07:58:34.843761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.299 [2024-10-09 07:58:34.860384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.299 [2024-10-09 07:58:34.860457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:33.299 [2024-10-09 
07:58:34.860482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.590 ms 00:18:33.299 [2024-10-09 07:58:34.860496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.299 [2024-10-09 07:58:34.874379] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:18:33.299 [2024-10-09 07:58:34.879688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.299 [2024-10-09 07:58:34.879789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:33.299 [2024-10-09 07:58:34.879811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.036 ms 00:18:33.299 [2024-10-09 07:58:34.879827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.299 [2024-10-09 07:58:34.939885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.299 [2024-10-09 07:58:34.939971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:18:33.299 [2024-10-09 07:58:34.939992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 59.994 ms 00:18:33.299 [2024-10-09 07:58:34.940008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.299 [2024-10-09 07:58:34.940258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.299 [2024-10-09 07:58:34.940296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:33.299 [2024-10-09 07:58:34.940313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.181 ms 00:18:33.299 [2024-10-09 07:58:34.940327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.299 [2024-10-09 07:58:34.972212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.299 [2024-10-09 07:58:34.972285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:18:33.299 [2024-10-09 07:58:34.972306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.783 ms 00:18:33.300 [2024-10-09 07:58:34.972323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.300 [2024-10-09 07:58:35.003384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.300 [2024-10-09 07:58:35.003472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:18:33.300 [2024-10-09 07:58:35.003496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.975 ms 00:18:33.300 [2024-10-09 07:58:35.003511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.300 [2024-10-09 07:58:35.004294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.300 [2024-10-09 07:58:35.004349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:33.300 [2024-10-09 07:58:35.004369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.697 ms 00:18:33.300 [2024-10-09 07:58:35.004388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.300 [2024-10-09 07:58:35.092481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.300 [2024-10-09 07:58:35.092587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:18:33.300 [2024-10-09 07:58:35.092611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 87.996 ms 00:18:33.300 [2024-10-09 07:58:35.092627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.300 [2024-10-09 
07:58:35.125388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.300 [2024-10-09 07:58:35.125458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:18:33.300 [2024-10-09 07:58:35.125481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.641 ms 00:18:33.300 [2024-10-09 07:58:35.125497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.300 [2024-10-09 07:58:35.157928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.300 [2024-10-09 07:58:35.158012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:18:33.300 [2024-10-09 07:58:35.158033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.370 ms 00:18:33.300 [2024-10-09 07:58:35.158049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.300 [2024-10-09 07:58:35.190353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.300 [2024-10-09 07:58:35.190435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:33.300 [2024-10-09 07:58:35.190457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.233 ms 00:18:33.300 [2024-10-09 07:58:35.190473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.300 [2024-10-09 07:58:35.190544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.300 [2024-10-09 07:58:35.190572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:33.300 [2024-10-09 07:58:35.190587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:18:33.300 [2024-10-09 07:58:35.190601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.300 [2024-10-09 07:58:35.190743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.300 [2024-10-09 07:58:35.190769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:33.300 [2024-10-09 07:58:35.190784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:18:33.300 [2024-10-09 07:58:35.190799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.300 [2024-10-09 07:58:35.192092] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2555.480 ms, result 0 00:18:33.300 { 00:18:33.300 "name": "ftl0", 00:18:33.300 "uuid": "692bc142-5dbb-4114-b658-d7861726dde1" 00:18:33.300 } 00:18:33.300 07:58:35 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:18:33.300 07:58:35 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # jq -r .name 00:18:33.300 07:58:35 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # grep -qw ftl0 00:18:33.560 07:58:35 ftl.ftl_bdevperf -- ftl/bdevperf.sh@30 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 00:18:33.818 [2024-10-09 07:58:35.648439] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:18:33.818 I/O size of 69632 is greater than zero copy threshold (65536). 00:18:33.818 Zero copy mechanism will not be used. 00:18:33.818 Running I/O for 4 seconds... 
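Before the first 4-second run reports in, it helps to collapse the provisioning walked through above into the RPC sequence it amounts to. Every call below appears verbatim in the trace; only the UUIDs are shortened:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0   # base NVMe
    $rpc bdev_lvol_create_lvstore nvme0n1 lvs                           # after clearing stale lvstores
    $rpc bdev_lvol_create nvme0n1p0 103424 -t -u <lvs uuid>             # thin 101 GiB lvol
    $rpc bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0    # cache NVMe
    $rpc bdev_split_create nvc0n1 -s 5171 1                             # 5171 MiB NV-cache slice
    $rpc -t 240 bdev_ftl_create -b ftl0 -d <lvol uuid> -c nvc0n1p0 --l2p_dram_limit 20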
00:18:35.686 1899.00 IOPS, 126.11 MiB/s [2024-10-09T07:58:39.072Z] 1982.50 IOPS, 131.65 MiB/s [2024-10-09T07:58:40.007Z] 1991.00 IOPS, 132.21 MiB/s [2024-10-09T07:58:40.007Z] 1965.00 IOPS, 130.49 MiB/s 00:18:37.995 Latency(us) 00:18:37.995 [2024-10-09T07:58:40.007Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:37.995 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632) 00:18:37.995 ftl0 : 4.00 1964.03 130.42 0.00 0.00 533.37 240.17 2785.28 00:18:37.995 [2024-10-09T07:58:40.007Z] =================================================================================================================== 00:18:37.995 [2024-10-09T07:58:40.007Z] Total : 1964.03 130.42 0.00 0.00 533.37 240.17 2785.28 00:18:37.995 [2024-10-09 07:58:39.660722] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:18:37.995 { 00:18:37.995 "results": [ 00:18:37.995 { 00:18:37.995 "job": "ftl0", 00:18:37.995 "core_mask": "0x1", 00:18:37.995 "workload": "randwrite", 00:18:37.995 "status": "finished", 00:18:37.995 "queue_depth": 1, 00:18:37.995 "io_size": 69632, 00:18:37.995 "runtime": 4.002477, 00:18:37.995 "iops": 1964.0337720866353, 00:18:37.995 "mibps": 130.42411767762812, 00:18:37.995 "io_failed": 0, 00:18:37.995 "io_timeout": 0, 00:18:37.995 "avg_latency_us": 533.3715932509164, 00:18:37.995 "min_latency_us": 240.17454545454547, 00:18:37.995 "max_latency_us": 2785.28 00:18:37.995 } 00:18:37.995 ], 00:18:37.995 "core_count": 1 00:18:37.995 } 00:18:37.995 07:58:39 ftl.ftl_bdevperf -- ftl/bdevperf.sh@31 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096 00:18:37.995 [2024-10-09 07:58:39.815751] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:18:37.995 Running I/O for 4 seconds... 
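The queue-depth-1 numbers just reported are internally consistent, a quick sanity check worth applying to any bdevperf table: bandwidth is IOPS times transfer size, and Little's law (concurrency = arrival rate x latency) should recover the configured depth:

    1964.03 IOPS x 69632 B   = 136759337 B/s / 2^20 = 130.42 MiB/s   (matches the table)
    1964.03 IOPS x 533.37 us = 1.05                                   (~ the -q 1 setting)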
00:18:39.864 7310.00 IOPS, 28.55 MiB/s [2024-10-09T07:58:43.251Z] 6849.50 IOPS, 26.76 MiB/s [2024-10-09T07:58:44.186Z] 6829.00 IOPS, 26.68 MiB/s [2024-10-09T07:58:44.186Z] 6970.75 IOPS, 27.23 MiB/s 00:18:42.174 Latency(us) 00:18:42.174 [2024-10-09T07:58:44.186Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:42.174 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096) 00:18:42.174 ftl0 : 4.02 6965.34 27.21 0.00 0.00 18328.00 372.36 41704.73 00:18:42.174 [2024-10-09T07:58:44.186Z] =================================================================================================================== 00:18:42.174 [2024-10-09T07:58:44.186Z] Total : 6965.34 27.21 0.00 0.00 18328.00 0.00 41704.73 00:18:42.174 [2024-10-09 07:58:43.847278] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:18:42.174 { 00:18:42.174 "results": [ 00:18:42.174 { 00:18:42.174 "job": "ftl0", 00:18:42.174 "core_mask": "0x1", 00:18:42.174 "workload": "randwrite", 00:18:42.174 "status": "finished", 00:18:42.174 "queue_depth": 128, 00:18:42.174 "io_size": 4096, 00:18:42.174 "runtime": 4.021196, 00:18:42.174 "iops": 6965.340659843489, 00:18:42.174 "mibps": 27.20836195251363, 00:18:42.174 "io_failed": 0, 00:18:42.174 "io_timeout": 0, 00:18:42.174 "avg_latency_us": 18327.996891388808, 00:18:42.174 "min_latency_us": 372.3636363636364, 00:18:42.174 "max_latency_us": 41704.72727272727 00:18:42.174 } 00:18:42.174 ], 00:18:42.174 "core_count": 1 00:18:42.174 } 00:18:42.174 07:58:43 ftl.ftl_bdevperf -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096 00:18:42.174 [2024-10-09 07:58:44.022855] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:18:42.174 Running I/O for 4 seconds... 
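The depth-128 run tells the complementary story: per-IO latency grew from 0.53 ms to 18.3 ms while IOPS rose only about 3.5x, which is what Little's law predicts when the extra depth mostly queues rather than parallelizes:

    6965.34 IOPS x 4096 B      = 28530032 B/s / 2^20 = 27.21 MiB/s   (matches the table)
    6965.34 IOPS x 18328.00 us = 127.7                                (~ the -q 128 setting)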
00:18:44.044 5405.00 IOPS, 21.11 MiB/s [2024-10-09T07:58:47.431Z] 5265.50 IOPS, 20.57 MiB/s [2024-10-09T07:58:48.365Z] 5432.33 IOPS, 21.22 MiB/s [2024-10-09T07:58:48.365Z] 5377.50 IOPS, 21.01 MiB/s 00:18:46.353 Latency(us) 00:18:46.353 [2024-10-09T07:58:48.365Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:46.353 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:18:46.353 Verification LBA range: start 0x0 length 0x1400000 00:18:46.353 ftl0 : 4.01 5388.82 21.05 0.00 0.00 23665.73 377.95 50045.67 00:18:46.353 [2024-10-09T07:58:48.365Z] =================================================================================================================== 00:18:46.353 [2024-10-09T07:58:48.365Z] Total : 5388.82 21.05 0.00 0.00 23665.73 0.00 50045.67 00:18:46.353 [2024-10-09 07:58:48.057180] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:18:46.354 { 00:18:46.354 "results": [ 00:18:46.354 { 00:18:46.354 "job": "ftl0", 00:18:46.354 "core_mask": "0x1", 00:18:46.354 "workload": "verify", 00:18:46.354 "status": "finished", 00:18:46.354 "verify_range": { 00:18:46.354 "start": 0, 00:18:46.354 "length": 20971520 00:18:46.354 }, 00:18:46.354 "queue_depth": 128, 00:18:46.354 "io_size": 4096, 00:18:46.354 "runtime": 4.014793, 00:18:46.354 "iops": 5388.820793500437, 00:18:46.354 "mibps": 21.05008122461108, 00:18:46.354 "io_failed": 0, 00:18:46.354 "io_timeout": 0, 00:18:46.354 "avg_latency_us": 23665.731575351387, 00:18:46.354 "min_latency_us": 377.9490909090909, 00:18:46.354 "max_latency_us": 50045.67272727273 00:18:46.354 } 00:18:46.354 ], 00:18:46.354 "core_count": 1 00:18:46.354 } 00:18:46.354 07:58:48 ftl.ftl_bdevperf -- ftl/bdevperf.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0 00:18:46.354 [2024-10-09 07:58:48.330691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.354 [2024-10-09 07:58:48.330779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:46.354 [2024-10-09 07:58:48.330803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:18:46.354 [2024-10-09 07:58:48.330820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.354 [2024-10-09 07:58:48.330856] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:46.354 [2024-10-09 07:58:48.334344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.354 [2024-10-09 07:58:48.334397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:46.354 [2024-10-09 07:58:48.334420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.441 ms 00:18:46.354 [2024-10-09 07:58:48.334433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.354 [2024-10-09 07:58:48.336384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.354 [2024-10-09 07:58:48.336433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:46.354 [2024-10-09 07:58:48.336455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.897 ms 00:18:46.354 [2024-10-09 07:58:48.336469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.612 [2024-10-09 07:58:48.520960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.612 [2024-10-09 07:58:48.521050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist 
L2P 00:18:46.612 [2024-10-09 07:58:48.521080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 184.446 ms 00:18:46.612 [2024-10-09 07:58:48.521095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.612 [2024-10-09 07:58:48.527831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.612 [2024-10-09 07:58:48.527874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:46.612 [2024-10-09 07:58:48.527893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.679 ms 00:18:46.612 [2024-10-09 07:58:48.527906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.612 [2024-10-09 07:58:48.559721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.612 [2024-10-09 07:58:48.559817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:46.612 [2024-10-09 07:58:48.559842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.706 ms 00:18:46.612 [2024-10-09 07:58:48.559856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.612 [2024-10-09 07:58:48.580013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.612 [2024-10-09 07:58:48.580099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:46.612 [2024-10-09 07:58:48.580125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.029 ms 00:18:46.612 [2024-10-09 07:58:48.580139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.612 [2024-10-09 07:58:48.580498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.612 [2024-10-09 07:58:48.580533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:46.612 [2024-10-09 07:58:48.580556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.255 ms 00:18:46.612 [2024-10-09 07:58:48.580570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.612 [2024-10-09 07:58:48.615090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.612 [2024-10-09 07:58:48.615180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:18:46.612 [2024-10-09 07:58:48.615208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.476 ms 00:18:46.613 [2024-10-09 07:58:48.615222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.872 [2024-10-09 07:58:48.647485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.872 [2024-10-09 07:58:48.647555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:18:46.872 [2024-10-09 07:58:48.647588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.179 ms 00:18:46.872 [2024-10-09 07:58:48.647607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.872 [2024-10-09 07:58:48.679438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.872 [2024-10-09 07:58:48.679503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:46.872 [2024-10-09 07:58:48.679526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.755 ms 00:18:46.872 [2024-10-09 07:58:48.679540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.872 [2024-10-09 07:58:48.711699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.872 [2024-10-09 07:58:48.711775] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:46.872 [2024-10-09 07:58:48.711802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.996 ms 00:18:46.872 [2024-10-09 07:58:48.711816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.872 [2024-10-09 07:58:48.711895] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:46.872 [2024-10-09 07:58:48.711924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands 1-100: 0 / 261120 wr_cnt: 0 state: free 00:18:46.873 [2024-10-09 07:58:48.713408] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:46.873 [2024-10-09 07:58:48.713423] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 692bc142-5dbb-4114-b658-d7861726dde1 00:18:46.873 [2024-10-09 07:58:48.713436] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:46.873 [2024-10-09 07:58:48.713450] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:46.873 [2024-10-09 07:58:48.713462] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:46.873 [2024-10-09 07:58:48.713476] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:46.873 [2024-10-09 07:58:48.713488] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:46.873 [2024-10-09 07:58:48.713502] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:46.873 [2024-10-09 07:58:48.713515] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:46.873 [2024-10-09 07:58:48.713530] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:46.873 [2024-10-09 07:58:48.713541] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:46.873 [2024-10-09 07:58:48.713556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.873 [2024-10-09 07:58:48.713569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:46.873 [2024-10-09 07:58:48.713585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.666 ms 00:18:46.873 [2024-10-09 07:58:48.713601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.873 [2024-10-09 07:58:48.730437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.873 [2024-10-09 07:58:48.730490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:46.873 [2024-10-09 07:58:48.730512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.744 ms 00:18:46.873 [2024-10-09 07:58:48.730526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.873 [2024-10-09 07:58:48.730972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.873 [2024-10-09 07:58:48.731006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:46.873 [2024-10-09 07:58:48.731024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.408 ms 00:18:46.873 [2024-10-09 07:58:48.731037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.873 [2024-10-09 07:58:48.771816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:46.873 [2024-10-09 07:58:48.771890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:46.873 [2024-10-09 07:58:48.771917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:46.873 [2024-10-09 07:58:48.771930] mngt/ftl_mngt.c: 431:trace_step:
*NOTICE*: [FTL][ftl0] status: 0 00:18:46.873 [2024-10-09 07:58:48.772024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:46.873 [2024-10-09 07:58:48.772040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:46.873 [2024-10-09 07:58:48.772059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:46.873 [2024-10-09 07:58:48.772071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.873 [2024-10-09 07:58:48.772205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:46.873 [2024-10-09 07:58:48.772226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:46.873 [2024-10-09 07:58:48.772243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:46.873 [2024-10-09 07:58:48.772255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.873 [2024-10-09 07:58:48.772283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:46.873 [2024-10-09 07:58:48.772297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:46.873 [2024-10-09 07:58:48.772312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:46.873 [2024-10-09 07:58:48.772327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.873 [2024-10-09 07:58:48.877547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:46.873 [2024-10-09 07:58:48.877628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:46.873 [2024-10-09 07:58:48.877655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:46.873 [2024-10-09 07:58:48.877668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.170 [2024-10-09 07:58:48.963275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:47.170 [2024-10-09 07:58:48.963372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:47.170 [2024-10-09 07:58:48.963402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:47.170 [2024-10-09 07:58:48.963416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.170 [2024-10-09 07:58:48.963569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:47.170 [2024-10-09 07:58:48.963605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:47.170 [2024-10-09 07:58:48.963623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:47.170 [2024-10-09 07:58:48.963636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.170 [2024-10-09 07:58:48.963710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:47.170 [2024-10-09 07:58:48.963729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:47.170 [2024-10-09 07:58:48.963745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:47.170 [2024-10-09 07:58:48.963757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.170 [2024-10-09 07:58:48.963891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:47.170 [2024-10-09 07:58:48.963912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:47.170 [2024-10-09 07:58:48.963933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 
ms 00:18:47.170 [2024-10-09 07:58:48.963946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.170 [2024-10-09 07:58:48.964002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:47.170 [2024-10-09 07:58:48.964021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:47.170 [2024-10-09 07:58:48.964037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:47.170 [2024-10-09 07:58:48.964049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.170 [2024-10-09 07:58:48.964102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:47.170 [2024-10-09 07:58:48.964117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:47.170 [2024-10-09 07:58:48.964132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:47.170 [2024-10-09 07:58:48.964144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.170 [2024-10-09 07:58:48.964202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:47.170 [2024-10-09 07:58:48.964219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:47.170 [2024-10-09 07:58:48.964234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:47.170 [2024-10-09 07:58:48.964246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.170 [2024-10-09 07:58:48.964430] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 633.675 ms, result 0 00:18:47.170 true 00:18:47.170 07:58:48 ftl.ftl_bdevperf -- ftl/bdevperf.sh@36 -- # killprocess 75662 00:18:47.170 07:58:48 ftl.ftl_bdevperf -- common/autotest_common.sh@950 -- # '[' -z 75662 ']' 00:18:47.170 07:58:48 ftl.ftl_bdevperf -- common/autotest_common.sh@954 -- # kill -0 75662 00:18:47.170 07:58:48 ftl.ftl_bdevperf -- common/autotest_common.sh@955 -- # uname 00:18:47.170 07:58:48 ftl.ftl_bdevperf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:47.170 07:58:48 ftl.ftl_bdevperf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75662 00:18:47.170 killing process with pid 75662 00:18:47.170 Received shutdown signal, test time was about 4.000000 seconds 00:18:47.170 00:18:47.170 Latency(us) 00:18:47.170 [2024-10-09T07:58:49.182Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:47.170 [2024-10-09T07:58:49.182Z] =================================================================================================================== 00:18:47.170 [2024-10-09T07:58:49.182Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:47.170 07:58:49 ftl.ftl_bdevperf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:47.170 07:58:49 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:47.170 07:58:49 ftl.ftl_bdevperf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75662' 00:18:47.170 07:58:49 ftl.ftl_bdevperf -- common/autotest_common.sh@969 -- # kill 75662 00:18:47.170 07:58:49 ftl.ftl_bdevperf -- common/autotest_common.sh@974 -- # wait 75662 00:18:49.071 Remove shared memory files 00:18:49.071 07:58:50 ftl.ftl_bdevperf -- ftl/bdevperf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:18:49.071 07:58:50 ftl.ftl_bdevperf -- ftl/bdevperf.sh@39 -- # remove_shm 00:18:49.071 07:58:50 ftl.ftl_bdevperf -- ftl/common.sh@204 -- # echo Remove shared memory files 00:18:49.071 07:58:50 
ftl.ftl_bdevperf -- ftl/common.sh@205 -- # rm -f rm -f 00:18:49.071 07:58:50 ftl.ftl_bdevperf -- ftl/common.sh@206 -- # rm -f rm -f 00:18:49.071 07:58:50 ftl.ftl_bdevperf -- ftl/common.sh@207 -- # rm -f rm -f 00:18:49.071 07:58:50 ftl.ftl_bdevperf -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:18:49.071 07:58:50 ftl.ftl_bdevperf -- ftl/common.sh@209 -- # rm -f rm -f 00:18:49.071 ************************************ 00:18:49.071 END TEST ftl_bdevperf 00:18:49.071 ************************************ 00:18:49.071 00:18:49.071 real 0m23.853s 00:18:49.071 user 0m28.319s 00:18:49.071 sys 0m1.125s 00:18:49.071 07:58:50 ftl.ftl_bdevperf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:49.071 07:58:50 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:18:49.071 07:58:50 ftl -- ftl/ftl.sh@75 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:18:49.071 07:58:50 ftl -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:18:49.071 07:58:50 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:49.071 07:58:50 ftl -- common/autotest_common.sh@10 -- # set +x 00:18:49.071 ************************************ 00:18:49.071 START TEST ftl_trim 00:18:49.071 ************************************ 00:18:49.071 07:58:50 ftl.ftl_trim -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:18:49.071 * Looking for test storage... 00:18:49.071 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:18:49.071 07:58:50 ftl.ftl_trim -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:18:49.071 07:58:50 ftl.ftl_trim -- common/autotest_common.sh@1681 -- # lcov --version 00:18:49.071 07:58:50 ftl.ftl_trim -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:18:49.071 07:58:51 ftl.ftl_trim -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:18:49.071 07:58:51 ftl.ftl_trim -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:49.071 07:58:51 ftl.ftl_trim -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:49.071 07:58:51 ftl.ftl_trim -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:49.071 07:58:51 ftl.ftl_trim -- scripts/common.sh@336 -- # IFS=.-: 00:18:49.071 07:58:51 ftl.ftl_trim -- scripts/common.sh@336 -- # read -ra ver1 00:18:49.071 07:58:51 ftl.ftl_trim -- scripts/common.sh@337 -- # IFS=.-: 00:18:49.071 07:58:51 ftl.ftl_trim -- scripts/common.sh@337 -- # read -ra ver2 00:18:49.071 07:58:51 ftl.ftl_trim -- scripts/common.sh@338 -- # local 'op=<' 00:18:49.071 07:58:51 ftl.ftl_trim -- scripts/common.sh@340 -- # ver1_l=2 00:18:49.071 07:58:51 ftl.ftl_trim -- scripts/common.sh@341 -- # ver2_l=1 00:18:49.071 07:58:51 ftl.ftl_trim -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:49.071 07:58:51 ftl.ftl_trim -- scripts/common.sh@344 -- # case "$op" in 00:18:49.071 07:58:51 ftl.ftl_trim -- scripts/common.sh@345 -- # : 1 00:18:49.071 07:58:51 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:49.071 07:58:51 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:49.071 07:58:51 ftl.ftl_trim -- scripts/common.sh@365 -- # decimal 1 00:18:49.071 07:58:51 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=1 00:18:49.071 07:58:51 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:49.071 07:58:51 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 1 00:18:49.071 07:58:51 ftl.ftl_trim -- scripts/common.sh@365 -- # ver1[v]=1 00:18:49.071 07:58:51 ftl.ftl_trim -- scripts/common.sh@366 -- # decimal 2 00:18:49.071 07:58:51 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=2 00:18:49.071 07:58:51 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:49.071 07:58:51 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 2 00:18:49.071 07:58:51 ftl.ftl_trim -- scripts/common.sh@366 -- # ver2[v]=2 00:18:49.071 07:58:51 ftl.ftl_trim -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:49.071 07:58:51 ftl.ftl_trim -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:49.071 07:58:51 ftl.ftl_trim -- scripts/common.sh@368 -- # return 0 00:18:49.071 07:58:51 ftl.ftl_trim -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:49.071 07:58:51 ftl.ftl_trim -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:18:49.071 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:49.071 --rc genhtml_branch_coverage=1 00:18:49.071 --rc genhtml_function_coverage=1 00:18:49.071 --rc genhtml_legend=1 00:18:49.071 --rc geninfo_all_blocks=1 00:18:49.071 --rc geninfo_unexecuted_blocks=1 00:18:49.071 00:18:49.071 ' 00:18:49.071 07:58:51 ftl.ftl_trim -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:18:49.071 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:49.071 --rc genhtml_branch_coverage=1 00:18:49.071 --rc genhtml_function_coverage=1 00:18:49.071 --rc genhtml_legend=1 00:18:49.071 --rc geninfo_all_blocks=1 00:18:49.071 --rc geninfo_unexecuted_blocks=1 00:18:49.071 00:18:49.071 ' 00:18:49.071 07:58:51 ftl.ftl_trim -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:18:49.071 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:49.071 --rc genhtml_branch_coverage=1 00:18:49.071 --rc genhtml_function_coverage=1 00:18:49.071 --rc genhtml_legend=1 00:18:49.071 --rc geninfo_all_blocks=1 00:18:49.071 --rc geninfo_unexecuted_blocks=1 00:18:49.071 00:18:49.071 ' 00:18:49.071 07:58:51 ftl.ftl_trim -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:18:49.071 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:49.071 --rc genhtml_branch_coverage=1 00:18:49.071 --rc genhtml_function_coverage=1 00:18:49.071 --rc genhtml_legend=1 00:18:49.071 --rc geninfo_all_blocks=1 00:18:49.071 --rc geninfo_unexecuted_blocks=1 00:18:49.071 00:18:49.071 ' 00:18:49.071 07:58:51 ftl.ftl_trim -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:18:49.071 07:58:51 ftl.ftl_trim -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:18:49.071 07:58:51 ftl.ftl_trim -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:18:49.071 07:58:51 ftl.ftl_trim -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:18:49.071 07:58:51 ftl.ftl_trim -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
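The lt/cmp_versions trace above amounts to a field-by-field numeric compare, used to decide whether the installed lcov understands the branch-coverage options. A condensed sketch of the same idea (standalone, not the harness code itself):

lt() {  # succeed if $1 sorts strictly before $2 as dot-separated numeric versions
  local IFS=.
  local -a a=($1) b=($2)
  local i
  for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
    ((10#${a[i]:-0} < 10#${b[i]:-0})) && return 0
    ((10#${a[i]:-0} > 10#${b[i]:-0})) && return 1
  done
  return 1  # equal versions are not "less than"
}
lt 1.15 2 && echo 'installed lcov predates the 2.x option names'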
00:18:49.071 07:58:51 ftl.ftl_trim -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:18:49.071 07:58:51 ftl.ftl_trim -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:49.071 07:58:51 ftl.ftl_trim -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:18:49.071 07:58:51 ftl.ftl_trim -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:18:49.071 07:58:51 ftl.ftl_trim -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:49.071 07:58:51 ftl.ftl_trim -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:49.071 07:58:51 ftl.ftl_trim -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:18:49.071 07:58:51 ftl.ftl_trim -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:18:49.071 07:58:51 ftl.ftl_trim -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:49.071 07:58:51 ftl.ftl_trim -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:49.072 07:58:51 ftl.ftl_trim -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:18:49.072 07:58:51 ftl.ftl_trim -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:18:49.072 07:58:51 ftl.ftl_trim -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:49.072 07:58:51 ftl.ftl_trim -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:49.072 07:58:51 ftl.ftl_trim -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:18:49.072 07:58:51 ftl.ftl_trim -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:18:49.072 07:58:51 ftl.ftl_trim -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:49.072 07:58:51 ftl.ftl_trim -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:49.072 07:58:51 ftl.ftl_trim -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:49.072 07:58:51 ftl.ftl_trim -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:49.072 07:58:51 ftl.ftl_trim -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:18:49.072 07:58:51 ftl.ftl_trim -- ftl/common.sh@23 -- # spdk_ini_pid= 00:18:49.072 07:58:51 ftl.ftl_trim -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:49.072 07:58:51 ftl.ftl_trim -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:49.072 07:58:51 ftl.ftl_trim -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:49.072 07:58:51 ftl.ftl_trim -- ftl/trim.sh@23 -- # device=0000:00:11.0 00:18:49.072 07:58:51 ftl.ftl_trim -- ftl/trim.sh@24 -- # cache_device=0000:00:10.0 00:18:49.072 07:58:51 ftl.ftl_trim -- ftl/trim.sh@25 -- # timeout=240 00:18:49.072 07:58:51 ftl.ftl_trim -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:18:49.072 07:58:51 ftl.ftl_trim -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:18:49.072 07:58:51 ftl.ftl_trim -- ftl/trim.sh@29 -- # [[ y != y ]] 00:18:49.072 07:58:51 ftl.ftl_trim -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:18:49.072 07:58:51 ftl.ftl_trim -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:18:49.072 07:58:51 ftl.ftl_trim -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:49.072 07:58:51 ftl.ftl_trim -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:49.072 07:58:51 ftl.ftl_trim -- 
ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:18:49.072 07:58:51 ftl.ftl_trim -- ftl/trim.sh@40 -- # svcpid=76015 00:18:49.072 07:58:51 ftl.ftl_trim -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:18:49.072 07:58:51 ftl.ftl_trim -- ftl/trim.sh@41 -- # waitforlisten 76015 00:18:49.072 07:58:51 ftl.ftl_trim -- common/autotest_common.sh@831 -- # '[' -z 76015 ']' 00:18:49.072 07:58:51 ftl.ftl_trim -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:49.072 07:58:51 ftl.ftl_trim -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:49.072 07:58:51 ftl.ftl_trim -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:49.072 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:49.072 07:58:51 ftl.ftl_trim -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:49.072 07:58:51 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:18:49.330 [2024-10-09 07:58:51.225701] Starting SPDK v25.01-pre git sha1 1c2942c86 / DPDK 24.03.0 initialization... 00:18:49.330 [2024-10-09 07:58:51.225975] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76015 ] 00:18:49.588 [2024-10-09 07:58:51.422254] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:49.845 [2024-10-09 07:58:51.636310] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:18:49.845 [2024-10-09 07:58:51.636404] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:18:49.845 [2024-10-09 07:58:51.636423] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:18:50.411 07:58:52 ftl.ftl_trim -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:50.411 07:58:52 ftl.ftl_trim -- common/autotest_common.sh@864 -- # return 0 00:18:50.411 07:58:52 ftl.ftl_trim -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:18:50.411 07:58:52 ftl.ftl_trim -- ftl/common.sh@54 -- # local name=nvme0 00:18:50.411 07:58:52 ftl.ftl_trim -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:18:50.411 07:58:52 ftl.ftl_trim -- ftl/common.sh@56 -- # local size=103424 00:18:50.411 07:58:52 ftl.ftl_trim -- ftl/common.sh@59 -- # local base_bdev 00:18:50.411 07:58:52 ftl.ftl_trim -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:18:50.982 07:58:52 ftl.ftl_trim -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:18:50.982 07:58:52 ftl.ftl_trim -- ftl/common.sh@62 -- # local base_size 00:18:50.982 07:58:52 ftl.ftl_trim -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:18:50.982 07:58:52 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:18:50.982 07:58:52 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:18:50.982 07:58:52 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:18:50.982 07:58:52 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:18:50.982 07:58:52 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:18:51.241 07:58:53 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:18:51.241 { 00:18:51.241 "name": "nvme0n1", 00:18:51.241 "aliases": [ 
00:18:51.241 "78d37017-d1f4-43cc-8728-9a30d92192d5" 00:18:51.241 ], 00:18:51.241 "product_name": "NVMe disk", 00:18:51.241 "block_size": 4096, 00:18:51.241 "num_blocks": 1310720, 00:18:51.241 "uuid": "78d37017-d1f4-43cc-8728-9a30d92192d5", 00:18:51.241 "numa_id": -1, 00:18:51.241 "assigned_rate_limits": { 00:18:51.241 "rw_ios_per_sec": 0, 00:18:51.241 "rw_mbytes_per_sec": 0, 00:18:51.241 "r_mbytes_per_sec": 0, 00:18:51.241 "w_mbytes_per_sec": 0 00:18:51.241 }, 00:18:51.241 "claimed": true, 00:18:51.241 "claim_type": "read_many_write_one", 00:18:51.241 "zoned": false, 00:18:51.241 "supported_io_types": { 00:18:51.241 "read": true, 00:18:51.241 "write": true, 00:18:51.241 "unmap": true, 00:18:51.241 "flush": true, 00:18:51.241 "reset": true, 00:18:51.241 "nvme_admin": true, 00:18:51.241 "nvme_io": true, 00:18:51.241 "nvme_io_md": false, 00:18:51.241 "write_zeroes": true, 00:18:51.241 "zcopy": false, 00:18:51.241 "get_zone_info": false, 00:18:51.241 "zone_management": false, 00:18:51.241 "zone_append": false, 00:18:51.241 "compare": true, 00:18:51.241 "compare_and_write": false, 00:18:51.241 "abort": true, 00:18:51.241 "seek_hole": false, 00:18:51.241 "seek_data": false, 00:18:51.241 "copy": true, 00:18:51.241 "nvme_iov_md": false 00:18:51.241 }, 00:18:51.241 "driver_specific": { 00:18:51.241 "nvme": [ 00:18:51.241 { 00:18:51.241 "pci_address": "0000:00:11.0", 00:18:51.241 "trid": { 00:18:51.241 "trtype": "PCIe", 00:18:51.241 "traddr": "0000:00:11.0" 00:18:51.241 }, 00:18:51.241 "ctrlr_data": { 00:18:51.241 "cntlid": 0, 00:18:51.241 "vendor_id": "0x1b36", 00:18:51.241 "model_number": "QEMU NVMe Ctrl", 00:18:51.241 "serial_number": "12341", 00:18:51.241 "firmware_revision": "8.0.0", 00:18:51.241 "subnqn": "nqn.2019-08.org.qemu:12341", 00:18:51.241 "oacs": { 00:18:51.241 "security": 0, 00:18:51.241 "format": 1, 00:18:51.241 "firmware": 0, 00:18:51.241 "ns_manage": 1 00:18:51.241 }, 00:18:51.241 "multi_ctrlr": false, 00:18:51.241 "ana_reporting": false 00:18:51.241 }, 00:18:51.241 "vs": { 00:18:51.241 "nvme_version": "1.4" 00:18:51.241 }, 00:18:51.241 "ns_data": { 00:18:51.241 "id": 1, 00:18:51.241 "can_share": false 00:18:51.241 } 00:18:51.241 } 00:18:51.241 ], 00:18:51.241 "mp_policy": "active_passive" 00:18:51.241 } 00:18:51.241 } 00:18:51.241 ]' 00:18:51.241 07:58:53 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:18:51.241 07:58:53 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bs=4096 00:18:51.241 07:58:53 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:18:51.242 07:58:53 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # nb=1310720 00:18:51.242 07:58:53 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:18:51.242 07:58:53 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 5120 00:18:51.242 07:58:53 ftl.ftl_trim -- ftl/common.sh@63 -- # base_size=5120 00:18:51.242 07:58:53 ftl.ftl_trim -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:18:51.242 07:58:53 ftl.ftl_trim -- ftl/common.sh@67 -- # clear_lvols 00:18:51.242 07:58:53 ftl.ftl_trim -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:18:51.242 07:58:53 ftl.ftl_trim -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:18:51.500 07:58:53 ftl.ftl_trim -- ftl/common.sh@28 -- # stores=f474699b-05bd-4e8d-8358-152f10c4e4b2 00:18:51.500 07:58:53 ftl.ftl_trim -- ftl/common.sh@29 -- # for lvs in $stores 00:18:51.500 07:58:53 ftl.ftl_trim -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_delete_lvstore -u f474699b-05bd-4e8d-8358-152f10c4e4b2 00:18:52.067 07:58:53 ftl.ftl_trim -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:18:52.325 07:58:54 ftl.ftl_trim -- ftl/common.sh@68 -- # lvs=0a866b3d-46a8-4eb4-9ff3-8f9746b1ad63 00:18:52.325 07:58:54 ftl.ftl_trim -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 0a866b3d-46a8-4eb4-9ff3-8f9746b1ad63 00:18:52.583 07:58:54 ftl.ftl_trim -- ftl/trim.sh@43 -- # split_bdev=562782ae-f8c9-469b-aa69-481cfc986f61 00:18:52.583 07:58:54 ftl.ftl_trim -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:10.0 562782ae-f8c9-469b-aa69-481cfc986f61 00:18:52.583 07:58:54 ftl.ftl_trim -- ftl/common.sh@35 -- # local name=nvc0 00:18:52.583 07:58:54 ftl.ftl_trim -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:18:52.583 07:58:54 ftl.ftl_trim -- ftl/common.sh@37 -- # local base_bdev=562782ae-f8c9-469b-aa69-481cfc986f61 00:18:52.583 07:58:54 ftl.ftl_trim -- ftl/common.sh@38 -- # local cache_size= 00:18:52.583 07:58:54 ftl.ftl_trim -- ftl/common.sh@41 -- # get_bdev_size 562782ae-f8c9-469b-aa69-481cfc986f61 00:18:52.583 07:58:54 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=562782ae-f8c9-469b-aa69-481cfc986f61 00:18:52.583 07:58:54 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:18:52.583 07:58:54 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:18:52.583 07:58:54 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:18:52.583 07:58:54 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 562782ae-f8c9-469b-aa69-481cfc986f61 00:18:52.842 07:58:54 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:18:52.842 { 00:18:52.842 "name": "562782ae-f8c9-469b-aa69-481cfc986f61", 00:18:52.842 "aliases": [ 00:18:52.842 "lvs/nvme0n1p0" 00:18:52.842 ], 00:18:52.842 "product_name": "Logical Volume", 00:18:52.842 "block_size": 4096, 00:18:52.842 "num_blocks": 26476544, 00:18:52.842 "uuid": "562782ae-f8c9-469b-aa69-481cfc986f61", 00:18:52.842 "assigned_rate_limits": { 00:18:52.842 "rw_ios_per_sec": 0, 00:18:52.842 "rw_mbytes_per_sec": 0, 00:18:52.842 "r_mbytes_per_sec": 0, 00:18:52.842 "w_mbytes_per_sec": 0 00:18:52.842 }, 00:18:52.842 "claimed": false, 00:18:52.842 "zoned": false, 00:18:52.842 "supported_io_types": { 00:18:52.842 "read": true, 00:18:52.842 "write": true, 00:18:52.842 "unmap": true, 00:18:52.842 "flush": false, 00:18:52.842 "reset": true, 00:18:52.842 "nvme_admin": false, 00:18:52.842 "nvme_io": false, 00:18:52.842 "nvme_io_md": false, 00:18:52.842 "write_zeroes": true, 00:18:52.842 "zcopy": false, 00:18:52.842 "get_zone_info": false, 00:18:52.842 "zone_management": false, 00:18:52.842 "zone_append": false, 00:18:52.842 "compare": false, 00:18:52.842 "compare_and_write": false, 00:18:52.842 "abort": false, 00:18:52.842 "seek_hole": true, 00:18:52.842 "seek_data": true, 00:18:52.842 "copy": false, 00:18:52.842 "nvme_iov_md": false 00:18:52.842 }, 00:18:52.842 "driver_specific": { 00:18:52.842 "lvol": { 00:18:52.842 "lvol_store_uuid": "0a866b3d-46a8-4eb4-9ff3-8f9746b1ad63", 00:18:52.842 "base_bdev": "nvme0n1", 00:18:52.842 "thin_provision": true, 00:18:52.842 "num_allocated_clusters": 0, 00:18:52.842 "snapshot": false, 00:18:52.842 "clone": false, 00:18:52.842 "esnap_clone": false 00:18:52.842 } 00:18:52.842 } 00:18:52.842 } 00:18:52.842 ]' 00:18:52.842 07:58:54 ftl.ftl_trim -- 
common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:18:52.842 07:58:54 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bs=4096 00:18:52.842 07:58:54 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:18:52.842 07:58:54 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # nb=26476544 00:18:52.842 07:58:54 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:18:52.842 07:58:54 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 103424 00:18:52.842 07:58:54 ftl.ftl_trim -- ftl/common.sh@41 -- # local base_size=5171 00:18:52.842 07:58:54 ftl.ftl_trim -- ftl/common.sh@44 -- # local nvc_bdev 00:18:52.842 07:58:54 ftl.ftl_trim -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:18:53.409 07:58:55 ftl.ftl_trim -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:18:53.409 07:58:55 ftl.ftl_trim -- ftl/common.sh@47 -- # [[ -z '' ]] 00:18:53.409 07:58:55 ftl.ftl_trim -- ftl/common.sh@48 -- # get_bdev_size 562782ae-f8c9-469b-aa69-481cfc986f61 00:18:53.409 07:58:55 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=562782ae-f8c9-469b-aa69-481cfc986f61 00:18:53.409 07:58:55 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:18:53.409 07:58:55 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:18:53.409 07:58:55 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:18:53.409 07:58:55 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 562782ae-f8c9-469b-aa69-481cfc986f61 00:18:53.409 07:58:55 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:18:53.409 { 00:18:53.409 "name": "562782ae-f8c9-469b-aa69-481cfc986f61", 00:18:53.409 "aliases": [ 00:18:53.409 "lvs/nvme0n1p0" 00:18:53.409 ], 00:18:53.409 "product_name": "Logical Volume", 00:18:53.409 "block_size": 4096, 00:18:53.409 "num_blocks": 26476544, 00:18:53.409 "uuid": "562782ae-f8c9-469b-aa69-481cfc986f61", 00:18:53.409 "assigned_rate_limits": { 00:18:53.409 "rw_ios_per_sec": 0, 00:18:53.409 "rw_mbytes_per_sec": 0, 00:18:53.409 "r_mbytes_per_sec": 0, 00:18:53.409 "w_mbytes_per_sec": 0 00:18:53.409 }, 00:18:53.410 "claimed": false, 00:18:53.410 "zoned": false, 00:18:53.410 "supported_io_types": { 00:18:53.410 "read": true, 00:18:53.410 "write": true, 00:18:53.410 "unmap": true, 00:18:53.410 "flush": false, 00:18:53.410 "reset": true, 00:18:53.410 "nvme_admin": false, 00:18:53.410 "nvme_io": false, 00:18:53.410 "nvme_io_md": false, 00:18:53.410 "write_zeroes": true, 00:18:53.410 "zcopy": false, 00:18:53.410 "get_zone_info": false, 00:18:53.410 "zone_management": false, 00:18:53.410 "zone_append": false, 00:18:53.410 "compare": false, 00:18:53.410 "compare_and_write": false, 00:18:53.410 "abort": false, 00:18:53.410 "seek_hole": true, 00:18:53.410 "seek_data": true, 00:18:53.410 "copy": false, 00:18:53.410 "nvme_iov_md": false 00:18:53.410 }, 00:18:53.410 "driver_specific": { 00:18:53.410 "lvol": { 00:18:53.410 "lvol_store_uuid": "0a866b3d-46a8-4eb4-9ff3-8f9746b1ad63", 00:18:53.410 "base_bdev": "nvme0n1", 00:18:53.410 "thin_provision": true, 00:18:53.410 "num_allocated_clusters": 0, 00:18:53.410 "snapshot": false, 00:18:53.410 "clone": false, 00:18:53.410 "esnap_clone": false 00:18:53.410 } 00:18:53.410 } 00:18:53.410 } 00:18:53.410 ]' 00:18:53.410 07:58:55 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:18:53.669 07:58:55 ftl.ftl_trim -- 
common/autotest_common.sh@1383 -- # bs=4096 00:18:53.669 07:58:55 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:18:53.669 07:58:55 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # nb=26476544 00:18:53.669 07:58:55 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:18:53.669 07:58:55 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 103424 00:18:53.669 07:58:55 ftl.ftl_trim -- ftl/common.sh@48 -- # cache_size=5171 00:18:53.669 07:58:55 ftl.ftl_trim -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:18:53.927 07:58:55 ftl.ftl_trim -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:18:53.927 07:58:55 ftl.ftl_trim -- ftl/trim.sh@46 -- # l2p_percentage=60 00:18:53.927 07:58:55 ftl.ftl_trim -- ftl/trim.sh@47 -- # get_bdev_size 562782ae-f8c9-469b-aa69-481cfc986f61 00:18:53.927 07:58:55 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=562782ae-f8c9-469b-aa69-481cfc986f61 00:18:53.927 07:58:55 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:18:53.927 07:58:55 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:18:53.927 07:58:55 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:18:53.927 07:58:55 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 562782ae-f8c9-469b-aa69-481cfc986f61 00:18:54.531 07:58:56 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:18:54.532 { 00:18:54.532 "name": "562782ae-f8c9-469b-aa69-481cfc986f61", 00:18:54.532 "aliases": [ 00:18:54.532 "lvs/nvme0n1p0" 00:18:54.532 ], 00:18:54.532 "product_name": "Logical Volume", 00:18:54.532 "block_size": 4096, 00:18:54.532 "num_blocks": 26476544, 00:18:54.532 "uuid": "562782ae-f8c9-469b-aa69-481cfc986f61", 00:18:54.532 "assigned_rate_limits": { 00:18:54.532 "rw_ios_per_sec": 0, 00:18:54.532 "rw_mbytes_per_sec": 0, 00:18:54.532 "r_mbytes_per_sec": 0, 00:18:54.532 "w_mbytes_per_sec": 0 00:18:54.532 }, 00:18:54.532 "claimed": false, 00:18:54.532 "zoned": false, 00:18:54.532 "supported_io_types": { 00:18:54.532 "read": true, 00:18:54.532 "write": true, 00:18:54.532 "unmap": true, 00:18:54.532 "flush": false, 00:18:54.532 "reset": true, 00:18:54.532 "nvme_admin": false, 00:18:54.532 "nvme_io": false, 00:18:54.532 "nvme_io_md": false, 00:18:54.532 "write_zeroes": true, 00:18:54.532 "zcopy": false, 00:18:54.532 "get_zone_info": false, 00:18:54.532 "zone_management": false, 00:18:54.532 "zone_append": false, 00:18:54.532 "compare": false, 00:18:54.532 "compare_and_write": false, 00:18:54.532 "abort": false, 00:18:54.532 "seek_hole": true, 00:18:54.532 "seek_data": true, 00:18:54.532 "copy": false, 00:18:54.532 "nvme_iov_md": false 00:18:54.532 }, 00:18:54.532 "driver_specific": { 00:18:54.532 "lvol": { 00:18:54.532 "lvol_store_uuid": "0a866b3d-46a8-4eb4-9ff3-8f9746b1ad63", 00:18:54.532 "base_bdev": "nvme0n1", 00:18:54.532 "thin_provision": true, 00:18:54.532 "num_allocated_clusters": 0, 00:18:54.532 "snapshot": false, 00:18:54.532 "clone": false, 00:18:54.532 "esnap_clone": false 00:18:54.532 } 00:18:54.532 } 00:18:54.532 } 00:18:54.532 ]' 00:18:54.532 07:58:56 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:18:54.532 07:58:56 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bs=4096 00:18:54.532 07:58:56 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:18:54.532 07:58:56 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # 
nb=26476544 00:18:54.532 07:58:56 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:18:54.532 07:58:56 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 103424 00:18:54.532 07:58:56 ftl.ftl_trim -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:18:54.532 07:58:56 ftl.ftl_trim -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 562782ae-f8c9-469b-aa69-481cfc986f61 -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:18:54.791 [2024-10-09 07:58:56.695985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.791 [2024-10-09 07:58:56.696050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:54.791 [2024-10-09 07:58:56.696074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:18:54.791 [2024-10-09 07:58:56.696093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.791 [2024-10-09 07:58:56.699763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.791 [2024-10-09 07:58:56.699828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:54.791 [2024-10-09 07:58:56.699864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.632 ms 00:18:54.791 [2024-10-09 07:58:56.699887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.791 [2024-10-09 07:58:56.700165] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:54.791 [2024-10-09 07:58:56.701216] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:54.791 [2024-10-09 07:58:56.701284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.791 [2024-10-09 07:58:56.701311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:54.791 [2024-10-09 07:58:56.701362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.139 ms 00:18:54.791 [2024-10-09 07:58:56.701390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.791 [2024-10-09 07:58:56.701700] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 2a716a08-2588-4711-9bfe-c66b02b59b71 00:18:54.791 [2024-10-09 07:58:56.702849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.791 [2024-10-09 07:58:56.702893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:18:54.791 [2024-10-09 07:58:56.702911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:18:54.791 [2024-10-09 07:58:56.702926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.791 [2024-10-09 07:58:56.707715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.791 [2024-10-09 07:58:56.707786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:54.791 [2024-10-09 07:58:56.707805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.693 ms 00:18:54.791 [2024-10-09 07:58:56.707819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.791 [2024-10-09 07:58:56.708035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.791 [2024-10-09 07:58:56.708071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:54.791 [2024-10-09 07:58:56.708086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.103 ms 00:18:54.791 [2024-10-09 07:58:56.708104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.791 [2024-10-09 07:58:56.708157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.791 [2024-10-09 07:58:56.708177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:54.791 [2024-10-09 07:58:56.708191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:18:54.791 [2024-10-09 07:58:56.708205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.791 [2024-10-09 07:58:56.708250] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:18:54.791 [2024-10-09 07:58:56.713294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.791 [2024-10-09 07:58:56.713392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:54.791 [2024-10-09 07:58:56.713416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.048 ms 00:18:54.791 [2024-10-09 07:58:56.713429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.791 [2024-10-09 07:58:56.713575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.791 [2024-10-09 07:58:56.713603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:54.791 [2024-10-09 07:58:56.713631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:18:54.791 [2024-10-09 07:58:56.713654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.791 [2024-10-09 07:58:56.713695] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:18:54.791 [2024-10-09 07:58:56.713855] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:54.791 [2024-10-09 07:58:56.713894] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:54.791 [2024-10-09 07:58:56.713931] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:18:54.791 [2024-10-09 07:58:56.713954] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:54.791 [2024-10-09 07:58:56.713968] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:54.791 [2024-10-09 07:58:56.713982] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:18:54.791 [2024-10-09 07:58:56.713994] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:54.791 [2024-10-09 07:58:56.714007] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:54.791 [2024-10-09 07:58:56.714019] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:54.791 [2024-10-09 07:58:56.714034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.791 [2024-10-09 07:58:56.714046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:54.791 [2024-10-09 07:58:56.714061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.341 ms 00:18:54.791 [2024-10-09 07:58:56.714072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.791 [2024-10-09 07:58:56.714182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.791 
[2024-10-09 07:58:56.714202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:54.791 [2024-10-09 07:58:56.714217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:18:54.791 [2024-10-09 07:58:56.714229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.791 [2024-10-09 07:58:56.714379] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:54.791 [2024-10-09 07:58:56.714407] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:54.791 [2024-10-09 07:58:56.714424] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:54.792 [2024-10-09 07:58:56.714437] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:54.792 [2024-10-09 07:58:56.714451] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:54.792 [2024-10-09 07:58:56.714462] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:54.792 [2024-10-09 07:58:56.714475] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:18:54.792 [2024-10-09 07:58:56.714486] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:54.792 [2024-10-09 07:58:56.714499] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:18:54.792 [2024-10-09 07:58:56.714510] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:54.792 [2024-10-09 07:58:56.714522] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:54.792 [2024-10-09 07:58:56.714534] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:18:54.792 [2024-10-09 07:58:56.714547] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:54.792 [2024-10-09 07:58:56.714558] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:54.792 [2024-10-09 07:58:56.714571] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:18:54.792 [2024-10-09 07:58:56.714581] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:54.792 [2024-10-09 07:58:56.714607] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:54.792 [2024-10-09 07:58:56.714629] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:18:54.792 [2024-10-09 07:58:56.714644] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:54.792 [2024-10-09 07:58:56.714656] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:54.792 [2024-10-09 07:58:56.714669] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:18:54.792 [2024-10-09 07:58:56.714679] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:54.792 [2024-10-09 07:58:56.714695] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:54.792 [2024-10-09 07:58:56.714706] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:18:54.792 [2024-10-09 07:58:56.714718] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:54.792 [2024-10-09 07:58:56.714730] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:54.792 [2024-10-09 07:58:56.714742] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:18:54.792 [2024-10-09 07:58:56.714753] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:54.792 [2024-10-09 07:58:56.714766] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] 
Region p2l3 00:18:54.792 [2024-10-09 07:58:56.714777] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:18:54.792 [2024-10-09 07:58:56.714789] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:54.792 [2024-10-09 07:58:56.714800] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:54.792 [2024-10-09 07:58:56.714815] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:18:54.792 [2024-10-09 07:58:56.714826] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:54.792 [2024-10-09 07:58:56.714839] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:54.792 [2024-10-09 07:58:56.714850] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:18:54.792 [2024-10-09 07:58:56.714862] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:54.792 [2024-10-09 07:58:56.714873] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:54.792 [2024-10-09 07:58:56.714886] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:18:54.792 [2024-10-09 07:58:56.714897] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:54.792 [2024-10-09 07:58:56.714909] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:54.792 [2024-10-09 07:58:56.714921] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:18:54.792 [2024-10-09 07:58:56.714933] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:54.792 [2024-10-09 07:58:56.714944] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:54.792 [2024-10-09 07:58:56.714958] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:54.792 [2024-10-09 07:58:56.714974] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:54.792 [2024-10-09 07:58:56.714987] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:54.792 [2024-10-09 07:58:56.715000] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:54.792 [2024-10-09 07:58:56.715017] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:54.792 [2024-10-09 07:58:56.715029] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:54.792 [2024-10-09 07:58:56.715042] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:54.792 [2024-10-09 07:58:56.715052] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:54.792 [2024-10-09 07:58:56.715065] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:54.792 [2024-10-09 07:58:56.715081] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:54.792 [2024-10-09 07:58:56.715099] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:54.792 [2024-10-09 07:58:56.715113] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:18:54.792 [2024-10-09 07:58:56.715126] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:18:54.792 [2024-10-09 07:58:56.715139] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 
blk_sz:0x80 00:18:54.792 [2024-10-09 07:58:56.715153] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:18:54.792 [2024-10-09 07:58:56.715165] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:18:54.792 [2024-10-09 07:58:56.715178] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:18:54.792 [2024-10-09 07:58:56.715190] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:18:54.792 [2024-10-09 07:58:56.715204] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:18:54.792 [2024-10-09 07:58:56.715216] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:18:54.792 [2024-10-09 07:58:56.715231] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:18:54.792 [2024-10-09 07:58:56.715243] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:18:54.792 [2024-10-09 07:58:56.715256] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:18:54.792 [2024-10-09 07:58:56.715268] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:18:54.792 [2024-10-09 07:58:56.715282] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:18:54.792 [2024-10-09 07:58:56.715295] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:54.792 [2024-10-09 07:58:56.715310] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:54.792 [2024-10-09 07:58:56.715323] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:54.792 [2024-10-09 07:58:56.715355] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:54.792 [2024-10-09 07:58:56.715369] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:54.792 [2024-10-09 07:58:56.715383] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:54.792 [2024-10-09 07:58:56.715397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.792 [2024-10-09 07:58:56.715411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:54.792 [2024-10-09 07:58:56.715427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.110 ms 00:18:54.792 [2024-10-09 07:58:56.715440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.792 [2024-10-09 07:58:56.715534] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region 
needs scrubbing, this may take a while. 00:18:54.792 [2024-10-09 07:58:56.715559] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:18:56.692 [2024-10-09 07:58:58.694653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:56.692 [2024-10-09 07:58:58.694731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:18:56.692 [2024-10-09 07:58:58.694753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1979.128 ms 00:18:56.692 [2024-10-09 07:58:58.694768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:56.950 [2024-10-09 07:58:58.737262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:56.950 [2024-10-09 07:58:58.737358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:56.950 [2024-10-09 07:58:58.737388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.138 ms 00:18:56.950 [2024-10-09 07:58:58.737408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:56.950 [2024-10-09 07:58:58.737691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:56.950 [2024-10-09 07:58:58.737740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:56.950 [2024-10-09 07:58:58.737761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.099 ms 00:18:56.950 [2024-10-09 07:58:58.737782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:56.950 [2024-10-09 07:58:58.779729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:56.950 [2024-10-09 07:58:58.779797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:56.950 [2024-10-09 07:58:58.779818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.898 ms 00:18:56.950 [2024-10-09 07:58:58.779833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:56.950 [2024-10-09 07:58:58.779966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:56.950 [2024-10-09 07:58:58.779992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:56.950 [2024-10-09 07:58:58.780010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:18:56.950 [2024-10-09 07:58:58.780023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:56.950 [2024-10-09 07:58:58.780381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:56.950 [2024-10-09 07:58:58.780417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:56.950 [2024-10-09 07:58:58.780433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.310 ms 00:18:56.950 [2024-10-09 07:58:58.780447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:56.950 [2024-10-09 07:58:58.780597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:56.950 [2024-10-09 07:58:58.780621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:56.950 [2024-10-09 07:58:58.780635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.114 ms 00:18:56.950 [2024-10-09 07:58:58.780655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:56.950 [2024-10-09 07:58:58.798900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:56.950 [2024-10-09 07:58:58.798967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize 
reloc 00:18:56.950 [2024-10-09 07:58:58.798991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.204 ms 00:18:56.950 [2024-10-09 07:58:58.799006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:56.950 [2024-10-09 07:58:58.812653] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:18:56.950 [2024-10-09 07:58:58.826583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:56.950 [2024-10-09 07:58:58.826655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:56.950 [2024-10-09 07:58:58.826679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.400 ms 00:18:56.950 [2024-10-09 07:58:58.826692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:56.950 [2024-10-09 07:58:58.887357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:56.950 [2024-10-09 07:58:58.887429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:18:56.950 [2024-10-09 07:58:58.887453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 60.525 ms 00:18:56.950 [2024-10-09 07:58:58.887467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:56.950 [2024-10-09 07:58:58.887781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:56.950 [2024-10-09 07:58:58.887818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:56.950 [2024-10-09 07:58:58.887843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.167 ms 00:18:56.950 [2024-10-09 07:58:58.887856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:56.950 [2024-10-09 07:58:58.922292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:56.950 [2024-10-09 07:58:58.922373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:18:56.950 [2024-10-09 07:58:58.922399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.382 ms 00:18:56.950 [2024-10-09 07:58:58.922413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:56.950 [2024-10-09 07:58:58.954827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:56.950 [2024-10-09 07:58:58.954893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:18:56.950 [2024-10-09 07:58:58.954917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.246 ms 00:18:56.950 [2024-10-09 07:58:58.954930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:56.950 [2024-10-09 07:58:58.955792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:56.950 [2024-10-09 07:58:58.955834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:56.950 [2024-10-09 07:58:58.955853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.734 ms 00:18:56.950 [2024-10-09 07:58:58.955865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.209 [2024-10-09 07:58:59.041110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.209 [2024-10-09 07:58:59.041182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:18:57.209 [2024-10-09 07:58:59.041210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 85.183 ms 00:18:57.209 [2024-10-09 07:58:59.041223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
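(Sizing note) The figures the harness computes above follow from plain block arithmetic: get_bdev_size takes the block_size and num_blocks that jq pulls out of bdev_get_bdevs and converts to MiB, and the L2P region in the layout dump is one 4-byte entry per user-visible block, which is why the 60 MiB --l2p_dram_limit leaves the table only partially resident ("59 (of 60) MiB" above). A minimal bash sketch, with every value copied from the log:

  # get_bdev_size: block_size x num_blocks, converted to MiB
  bs=4096; nb=26476544
  echo $(( bs * nb / 1024 / 1024 ))        # => 103424, the bdev_size echoed above
  # L2P footprint: 23592960 entries x 4 B each (layout dump: "L2P entries" / "L2P address size")
  echo $(( 23592960 * 4 / 1024 / 1024 ))   # => 90, matching "Region l2p ... blocks: 90.00 MiB"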
00:18:57.209 [2024-10-09 07:58:59.073970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.209 [2024-10-09 07:58:59.074039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:18:57.209 [2024-10-09 07:58:59.074063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.554 ms 00:18:57.209 [2024-10-09 07:58:59.074076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.209 [2024-10-09 07:58:59.106046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.209 [2024-10-09 07:58:59.106107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:18:57.209 [2024-10-09 07:58:59.106129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.846 ms 00:18:57.209 [2024-10-09 07:58:59.106141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.209 [2024-10-09 07:58:59.137965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.209 [2024-10-09 07:58:59.138024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:57.209 [2024-10-09 07:58:59.138046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.713 ms 00:18:57.209 [2024-10-09 07:58:59.138059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.209 [2024-10-09 07:58:59.138185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.209 [2024-10-09 07:58:59.138206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:57.209 [2024-10-09 07:58:59.138226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:18:57.209 [2024-10-09 07:58:59.138259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.209 [2024-10-09 07:58:59.138381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.209 [2024-10-09 07:58:59.138401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:57.209 [2024-10-09 07:58:59.138417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:18:57.209 [2024-10-09 07:58:59.138431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.209 [2024-10-09 07:58:59.139427] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:57.209 [2024-10-09 07:58:59.143627] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2443.109 ms, result 0 00:18:57.209 [2024-10-09 07:58:59.144425] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:57.209 { 00:18:57.209 "name": "ftl0", 00:18:57.209 "uuid": "2a716a08-2588-4711-9bfe-c66b02b59b71" 00:18:57.209 } 00:18:57.209 07:58:59 ftl.ftl_trim -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:18:57.209 07:58:59 ftl.ftl_trim -- common/autotest_common.sh@899 -- # local bdev_name=ftl0 00:18:57.209 07:58:59 ftl.ftl_trim -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:18:57.209 07:58:59 ftl.ftl_trim -- common/autotest_common.sh@901 -- # local i 00:18:57.209 07:58:59 ftl.ftl_trim -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:18:57.209 07:58:59 ftl.ftl_trim -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:18:57.209 07:58:59 ftl.ftl_trim -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:18:57.468 07:58:59 ftl.ftl_trim -- 
common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:18:58.035 [ 00:18:58.035 { 00:18:58.035 "name": "ftl0", 00:18:58.035 "aliases": [ 00:18:58.035 "2a716a08-2588-4711-9bfe-c66b02b59b71" 00:18:58.035 ], 00:18:58.035 "product_name": "FTL disk", 00:18:58.035 "block_size": 4096, 00:18:58.035 "num_blocks": 23592960, 00:18:58.035 "uuid": "2a716a08-2588-4711-9bfe-c66b02b59b71", 00:18:58.035 "assigned_rate_limits": { 00:18:58.035 "rw_ios_per_sec": 0, 00:18:58.035 "rw_mbytes_per_sec": 0, 00:18:58.035 "r_mbytes_per_sec": 0, 00:18:58.035 "w_mbytes_per_sec": 0 00:18:58.035 }, 00:18:58.035 "claimed": false, 00:18:58.035 "zoned": false, 00:18:58.035 "supported_io_types": { 00:18:58.035 "read": true, 00:18:58.035 "write": true, 00:18:58.035 "unmap": true, 00:18:58.035 "flush": true, 00:18:58.035 "reset": false, 00:18:58.035 "nvme_admin": false, 00:18:58.035 "nvme_io": false, 00:18:58.035 "nvme_io_md": false, 00:18:58.035 "write_zeroes": true, 00:18:58.035 "zcopy": false, 00:18:58.035 "get_zone_info": false, 00:18:58.035 "zone_management": false, 00:18:58.035 "zone_append": false, 00:18:58.035 "compare": false, 00:18:58.035 "compare_and_write": false, 00:18:58.035 "abort": false, 00:18:58.035 "seek_hole": false, 00:18:58.035 "seek_data": false, 00:18:58.035 "copy": false, 00:18:58.035 "nvme_iov_md": false 00:18:58.035 }, 00:18:58.035 "driver_specific": { 00:18:58.035 "ftl": { 00:18:58.035 "base_bdev": "562782ae-f8c9-469b-aa69-481cfc986f61", 00:18:58.035 "cache": "nvc0n1p0" 00:18:58.035 } 00:18:58.035 } 00:18:58.035 } 00:18:58.035 ] 00:18:58.035 07:58:59 ftl.ftl_trim -- common/autotest_common.sh@907 -- # return 0 00:18:58.035 07:58:59 ftl.ftl_trim -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:18:58.035 07:58:59 ftl.ftl_trim -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:18:58.035 07:59:00 ftl.ftl_trim -- ftl/trim.sh@56 -- # echo ']}' 00:18:58.294 07:59:00 ftl.ftl_trim -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:18:58.553 07:59:00 ftl.ftl_trim -- ftl/trim.sh@59 -- # bdev_info='[ 00:18:58.553 { 00:18:58.553 "name": "ftl0", 00:18:58.553 "aliases": [ 00:18:58.553 "2a716a08-2588-4711-9bfe-c66b02b59b71" 00:18:58.553 ], 00:18:58.553 "product_name": "FTL disk", 00:18:58.553 "block_size": 4096, 00:18:58.553 "num_blocks": 23592960, 00:18:58.553 "uuid": "2a716a08-2588-4711-9bfe-c66b02b59b71", 00:18:58.553 "assigned_rate_limits": { 00:18:58.553 "rw_ios_per_sec": 0, 00:18:58.553 "rw_mbytes_per_sec": 0, 00:18:58.553 "r_mbytes_per_sec": 0, 00:18:58.553 "w_mbytes_per_sec": 0 00:18:58.553 }, 00:18:58.553 "claimed": false, 00:18:58.553 "zoned": false, 00:18:58.553 "supported_io_types": { 00:18:58.553 "read": true, 00:18:58.553 "write": true, 00:18:58.553 "unmap": true, 00:18:58.553 "flush": true, 00:18:58.553 "reset": false, 00:18:58.553 "nvme_admin": false, 00:18:58.553 "nvme_io": false, 00:18:58.553 "nvme_io_md": false, 00:18:58.553 "write_zeroes": true, 00:18:58.553 "zcopy": false, 00:18:58.553 "get_zone_info": false, 00:18:58.553 "zone_management": false, 00:18:58.553 "zone_append": false, 00:18:58.553 "compare": false, 00:18:58.553 "compare_and_write": false, 00:18:58.553 "abort": false, 00:18:58.553 "seek_hole": false, 00:18:58.553 "seek_data": false, 00:18:58.553 "copy": false, 00:18:58.553 "nvme_iov_md": false 00:18:58.553 }, 00:18:58.553 "driver_specific": { 00:18:58.553 "ftl": { 00:18:58.553 "base_bdev": "562782ae-f8c9-469b-aa69-481cfc986f61", 
00:18:58.553 "cache": "nvc0n1p0" 00:18:58.553 } 00:18:58.553 } 00:18:58.553 } 00:18:58.553 ]' 00:18:58.553 07:59:00 ftl.ftl_trim -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:18:58.553 07:59:00 ftl.ftl_trim -- ftl/trim.sh@60 -- # nb=23592960 00:18:58.553 07:59:00 ftl.ftl_trim -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:18:58.811 [2024-10-09 07:59:00.725185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:58.811 [2024-10-09 07:59:00.725250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:58.811 [2024-10-09 07:59:00.725271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:18:58.811 [2024-10-09 07:59:00.725287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:58.811 [2024-10-09 07:59:00.725348] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:18:58.811 [2024-10-09 07:59:00.728697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:58.811 [2024-10-09 07:59:00.728732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:58.812 [2024-10-09 07:59:00.728753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.318 ms 00:18:58.812 [2024-10-09 07:59:00.728765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:58.812 [2024-10-09 07:59:00.729402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:58.812 [2024-10-09 07:59:00.729435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:58.812 [2024-10-09 07:59:00.729457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.564 ms 00:18:58.812 [2024-10-09 07:59:00.729469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:58.812 [2024-10-09 07:59:00.733200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:58.812 [2024-10-09 07:59:00.733231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:58.812 [2024-10-09 07:59:00.733248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.688 ms 00:18:58.812 [2024-10-09 07:59:00.733260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:58.812 [2024-10-09 07:59:00.740795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:58.812 [2024-10-09 07:59:00.740828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:58.812 [2024-10-09 07:59:00.740849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.474 ms 00:18:58.812 [2024-10-09 07:59:00.740862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:58.812 [2024-10-09 07:59:00.772553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:58.812 [2024-10-09 07:59:00.772612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:58.812 [2024-10-09 07:59:00.772639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.572 ms 00:18:58.812 [2024-10-09 07:59:00.772651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:58.812 [2024-10-09 07:59:00.791570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:58.812 [2024-10-09 07:59:00.791639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:58.812 [2024-10-09 07:59:00.791662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 18.794 ms 00:18:58.812 [2024-10-09 07:59:00.791676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:58.812 [2024-10-09 07:59:00.791946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:58.812 [2024-10-09 07:59:00.791969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:58.812 [2024-10-09 07:59:00.791986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.137 ms 00:18:58.812 [2024-10-09 07:59:00.791998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.071 [2024-10-09 07:59:00.823718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.071 [2024-10-09 07:59:00.823777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:18:59.071 [2024-10-09 07:59:00.823800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.673 ms 00:18:59.071 [2024-10-09 07:59:00.823812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.071 [2024-10-09 07:59:00.855239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.071 [2024-10-09 07:59:00.855298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:18:59.071 [2024-10-09 07:59:00.855324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.292 ms 00:18:59.071 [2024-10-09 07:59:00.855359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.071 [2024-10-09 07:59:00.886345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.071 [2024-10-09 07:59:00.886413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:59.071 [2024-10-09 07:59:00.886436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.853 ms 00:18:59.071 [2024-10-09 07:59:00.886449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.071 [2024-10-09 07:59:00.917363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.071 [2024-10-09 07:59:00.917412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:59.071 [2024-10-09 07:59:00.917434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.740 ms 00:18:59.071 [2024-10-09 07:59:00.917446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.071 [2024-10-09 07:59:00.917552] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:59.071 [2024-10-09 07:59:00.917593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:59.071 [2024-10-09 07:59:00.917614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:59.071 [2024-10-09 07:59:00.917627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:59.071 [2024-10-09 07:59:00.917641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:59.072 [2024-10-09 07:59:00.917654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:59.072 [2024-10-09 07:59:00.917672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:59.072 [2024-10-09 07:59:00.917684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:59.072 [2024-10-09 07:59:00.917698] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:59.072 [2024-10-09 07:59:00.917710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:59.072 [2024-10-09 07:59:00.917724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:59.072 [2024-10-09 07:59:00.917737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:59.072 [2024-10-09 07:59:00.917751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:59.072 [2024-10-09 07:59:00.917763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:59.072 [2024-10-09 07:59:00.917777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:59.072 [2024-10-09 07:59:00.917790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:59.072 [2024-10-09 07:59:00.917804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:59.072 [2024-10-09 07:59:00.917816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:59.072 [2024-10-09 07:59:00.917830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:59.072 [2024-10-09 07:59:00.917842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:59.072 [2024-10-09 07:59:00.917856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:59.072 [2024-10-09 07:59:00.917868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:59.072 [2024-10-09 07:59:00.917887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:59.072 [2024-10-09 07:59:00.917900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:59.072 [2024-10-09 07:59:00.917913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:59.072 [2024-10-09 07:59:00.917926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:59.072 [2024-10-09 07:59:00.917963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:59.072 [2024-10-09 07:59:00.917977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:59.072 [2024-10-09 07:59:00.917991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:59.072 [2024-10-09 07:59:00.918004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:59.072 [2024-10-09 07:59:00.918018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:59.072 [2024-10-09 07:59:00.918030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:59.072 [2024-10-09 07:59:00.918044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:59.072 
[2024-10-09 07:59:00.918056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:59.072 [2024-10-09 07:59:00.918070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:59.072 [2024-10-09 07:59:00.918082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:59.072 [2024-10-09 07:59:00.918097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:59.072 [2024-10-09 07:59:00.918110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:59.072 [2024-10-09 07:59:00.918127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:59.072 [2024-10-09 07:59:00.918140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:59.072 [2024-10-09 07:59:00.918154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:59.072 [2024-10-09 07:59:00.918167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:59.072 [2024-10-09 07:59:00.918181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:59.072 [2024-10-09 07:59:00.918193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:59.072 [2024-10-09 07:59:00.918206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:59.072 [2024-10-09 07:59:00.918218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:59.072 [2024-10-09 07:59:00.918234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:59.072 [2024-10-09 07:59:00.918247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:18:59.072 [2024-10-09 07:59:00.918261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:59.072 [2024-10-09 07:59:00.918273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:59.072 [2024-10-09 07:59:00.918288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:59.072 [2024-10-09 07:59:00.918300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:59.072 [2024-10-09 07:59:00.918314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:59.072 [2024-10-09 07:59:00.918326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:59.072 [2024-10-09 07:59:00.918359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:59.072 [2024-10-09 07:59:00.918373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:59.072 [2024-10-09 07:59:00.918389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:59.072 [2024-10-09 07:59:00.918401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 
state: free 00:18:59.072 [2024-10-09 07:59:00.918415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:59.072 [2024-10-09 07:59:00.918428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:59.072 [2024-10-09 07:59:00.918442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:59.072 [2024-10-09 07:59:00.918454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:59.072 [2024-10-09 07:59:00.918468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:59.072 [2024-10-09 07:59:00.918480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:59.072 [2024-10-09 07:59:00.918494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:59.072 [2024-10-09 07:59:00.918506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:59.072 [2024-10-09 07:59:00.918520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:59.072 [2024-10-09 07:59:00.918532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:59.072 [2024-10-09 07:59:00.918546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:59.072 [2024-10-09 07:59:00.918559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:59.072 [2024-10-09 07:59:00.918575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:59.072 [2024-10-09 07:59:00.918588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:59.072 [2024-10-09 07:59:00.918604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:18:59.072 [2024-10-09 07:59:00.918616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:59.072 [2024-10-09 07:59:00.918630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:59.072 [2024-10-09 07:59:00.918642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:59.072 [2024-10-09 07:59:00.918656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:59.072 [2024-10-09 07:59:00.918668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:59.072 [2024-10-09 07:59:00.918683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:59.072 [2024-10-09 07:59:00.918695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:59.072 [2024-10-09 07:59:00.918709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:59.072 [2024-10-09 07:59:00.918721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:59.072 [2024-10-09 07:59:00.918735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 
0 / 261120 wr_cnt: 0 state: free 00:18:59.072 [2024-10-09 07:59:00.918747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:59.072 [2024-10-09 07:59:00.918762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:59.072 [2024-10-09 07:59:00.918775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:59.072 [2024-10-09 07:59:00.918791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:59.072 [2024-10-09 07:59:00.918803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:59.072 [2024-10-09 07:59:00.918817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:59.072 [2024-10-09 07:59:00.918829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:59.072 [2024-10-09 07:59:00.918843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:59.072 [2024-10-09 07:59:00.918855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:59.072 [2024-10-09 07:59:00.918869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:59.072 [2024-10-09 07:59:00.918882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:59.072 [2024-10-09 07:59:00.918896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:59.072 [2024-10-09 07:59:00.918908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:59.072 [2024-10-09 07:59:00.918922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:59.072 [2024-10-09 07:59:00.918934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:59.072 [2024-10-09 07:59:00.918948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:59.072 [2024-10-09 07:59:00.918961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:59.073 [2024-10-09 07:59:00.918976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:59.073 [2024-10-09 07:59:00.919000] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:59.073 [2024-10-09 07:59:00.919021] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 2a716a08-2588-4711-9bfe-c66b02b59b71 00:18:59.073 [2024-10-09 07:59:00.919039] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:59.073 [2024-10-09 07:59:00.919053] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:59.073 [2024-10-09 07:59:00.919064] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:59.073 [2024-10-09 07:59:00.919078] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:59.073 [2024-10-09 07:59:00.919089] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:59.073 [2024-10-09 07:59:00.919103] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 
00:18:59.073 [2024-10-09 07:59:00.919114] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:59.073 [2024-10-09 07:59:00.919127] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:59.073 [2024-10-09 07:59:00.919137] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:59.073 [2024-10-09 07:59:00.919152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.073 [2024-10-09 07:59:00.919164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:59.073 [2024-10-09 07:59:00.919178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.603 ms 00:18:59.073 [2024-10-09 07:59:00.919190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.073 [2024-10-09 07:59:00.935978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.073 [2024-10-09 07:59:00.936021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:59.073 [2024-10-09 07:59:00.936044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.736 ms 00:18:59.073 [2024-10-09 07:59:00.936057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.073 [2024-10-09 07:59:00.936573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.073 [2024-10-09 07:59:00.936603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:59.073 [2024-10-09 07:59:00.936625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.423 ms 00:18:59.073 [2024-10-09 07:59:00.936637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.073 [2024-10-09 07:59:00.995108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:59.073 [2024-10-09 07:59:00.995181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:59.073 [2024-10-09 07:59:00.995203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:59.073 [2024-10-09 07:59:00.995216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.073 [2024-10-09 07:59:00.995413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:59.073 [2024-10-09 07:59:00.995434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:59.073 [2024-10-09 07:59:00.995455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:59.073 [2024-10-09 07:59:00.995467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.073 [2024-10-09 07:59:00.995570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:59.073 [2024-10-09 07:59:00.995602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:59.073 [2024-10-09 07:59:00.995622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:59.073 [2024-10-09 07:59:00.995634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.073 [2024-10-09 07:59:00.995675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:59.073 [2024-10-09 07:59:00.995701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:59.073 [2024-10-09 07:59:00.995736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:59.073 [2024-10-09 07:59:00.995748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.331 [2024-10-09 07:59:01.106205] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:59.331 [2024-10-09 07:59:01.106271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:59.331 [2024-10-09 07:59:01.106293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:59.331 [2024-10-09 07:59:01.106306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.331 [2024-10-09 07:59:01.191423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:59.331 [2024-10-09 07:59:01.191491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:59.331 [2024-10-09 07:59:01.191514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:59.331 [2024-10-09 07:59:01.191530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.331 [2024-10-09 07:59:01.191672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:59.331 [2024-10-09 07:59:01.191693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:59.331 [2024-10-09 07:59:01.191711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:59.331 [2024-10-09 07:59:01.191724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.331 [2024-10-09 07:59:01.191790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:59.331 [2024-10-09 07:59:01.191805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:59.331 [2024-10-09 07:59:01.191843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:59.331 [2024-10-09 07:59:01.191856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.331 [2024-10-09 07:59:01.192011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:59.332 [2024-10-09 07:59:01.192032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:59.332 [2024-10-09 07:59:01.192047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:59.332 [2024-10-09 07:59:01.192059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.332 [2024-10-09 07:59:01.192142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:59.332 [2024-10-09 07:59:01.192162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:59.332 [2024-10-09 07:59:01.192177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:59.332 [2024-10-09 07:59:01.192189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.332 [2024-10-09 07:59:01.192259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:59.332 [2024-10-09 07:59:01.192275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:59.332 [2024-10-09 07:59:01.192293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:59.332 [2024-10-09 07:59:01.192305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.332 [2024-10-09 07:59:01.192395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:59.332 [2024-10-09 07:59:01.192415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:59.332 [2024-10-09 07:59:01.192455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:59.332 [2024-10-09 07:59:01.192468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:18:59.332 [2024-10-09 07:59:01.192701] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 467.504 ms, result 0 00:18:59.332 true 00:18:59.332 07:59:01 ftl.ftl_trim -- ftl/trim.sh@63 -- # killprocess 76015 00:18:59.332 07:59:01 ftl.ftl_trim -- common/autotest_common.sh@950 -- # '[' -z 76015 ']' 00:18:59.332 07:59:01 ftl.ftl_trim -- common/autotest_common.sh@954 -- # kill -0 76015 00:18:59.332 07:59:01 ftl.ftl_trim -- common/autotest_common.sh@955 -- # uname 00:18:59.332 07:59:01 ftl.ftl_trim -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:59.332 07:59:01 ftl.ftl_trim -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76015 00:18:59.332 07:59:01 ftl.ftl_trim -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:59.332 07:59:01 ftl.ftl_trim -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:59.332 killing process with pid 76015 00:18:59.332 07:59:01 ftl.ftl_trim -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76015' 00:18:59.332 07:59:01 ftl.ftl_trim -- common/autotest_common.sh@969 -- # kill 76015 00:18:59.332 07:59:01 ftl.ftl_trim -- common/autotest_common.sh@974 -- # wait 76015 00:19:04.598 07:59:05 ftl.ftl_trim -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536 00:19:05.166 65536+0 records in 00:19:05.166 65536+0 records out 00:19:05.166 268435456 bytes (268 MB, 256 MiB) copied, 1.20401 s, 223 MB/s 00:19:05.166 07:59:07 ftl.ftl_trim -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:05.424 [2024-10-09 07:59:07.216550] Starting SPDK v25.01-pre git sha1 1c2942c86 / DPDK 24.03.0 initialization... 
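For context, the ftl.ftl_trim steps traced above (trim.sh@66 and trim.sh@69) amount to the following sequence — a minimal sketch, with paths shown relative to the spdk_repo root from the trace; the xtrace hides dd's output redirect, so the random_pattern destination is inferred from the --if path passed to spdk_dd:

    # trim.sh@66: generate 256 MiB of random data (65536 x 4 KiB blocks);
    # the redirect target is inferred, not shown in the xtrace
    dd if=/dev/urandom bs=4K count=65536 > test/ftl/random_pattern
    # trim.sh@69: replay that pattern into the ftl0 bdev through spdk_dd,
    # reusing the FTL bdev configuration captured in ftl.json
    build/bin/spdk_dd --if=test/ftl/random_pattern --ob=ftl0 \
        --json=test/ftl/config/ftl.json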
00:19:05.424 [2024-10-09 07:59:07.216728] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76226 ] 00:19:05.424 [2024-10-09 07:59:07.389825] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:05.683 [2024-10-09 07:59:07.581461] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:19:05.941 [2024-10-09 07:59:07.907327] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:05.941 [2024-10-09 07:59:07.907446] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:06.201 [2024-10-09 07:59:08.072220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:06.201 [2024-10-09 07:59:08.072296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:06.201 [2024-10-09 07:59:08.072321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:06.201 [2024-10-09 07:59:08.072350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.201 [2024-10-09 07:59:08.075820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:06.201 [2024-10-09 07:59:08.075868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:06.201 [2024-10-09 07:59:08.075886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.438 ms 00:19:06.201 [2024-10-09 07:59:08.075897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.201 [2024-10-09 07:59:08.076175] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:06.201 [2024-10-09 07:59:08.077144] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:06.201 [2024-10-09 07:59:08.077186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:06.201 [2024-10-09 07:59:08.077201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:06.201 [2024-10-09 07:59:08.077219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.022 ms 00:19:06.201 [2024-10-09 07:59:08.077230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.201 [2024-10-09 07:59:08.078581] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:06.201 [2024-10-09 07:59:08.095502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:06.201 [2024-10-09 07:59:08.095573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:06.201 [2024-10-09 07:59:08.095603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.921 ms 00:19:06.201 [2024-10-09 07:59:08.095617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.201 [2024-10-09 07:59:08.095791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:06.201 [2024-10-09 07:59:08.095814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:06.201 [2024-10-09 07:59:08.095834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:19:06.201 [2024-10-09 07:59:08.095846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.201 [2024-10-09 07:59:08.100306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:19:06.201 [2024-10-09 07:59:08.100373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:06.201 [2024-10-09 07:59:08.100391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.396 ms 00:19:06.201 [2024-10-09 07:59:08.100403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.201 [2024-10-09 07:59:08.100563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:06.201 [2024-10-09 07:59:08.100595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:06.201 [2024-10-09 07:59:08.100609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms 00:19:06.201 [2024-10-09 07:59:08.100621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.201 [2024-10-09 07:59:08.100663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:06.201 [2024-10-09 07:59:08.100680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:06.201 [2024-10-09 07:59:08.100693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:19:06.201 [2024-10-09 07:59:08.100704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.201 [2024-10-09 07:59:08.100737] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:06.201 [2024-10-09 07:59:08.105101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:06.201 [2024-10-09 07:59:08.105143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:06.202 [2024-10-09 07:59:08.105159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.374 ms 00:19:06.202 [2024-10-09 07:59:08.105170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.202 [2024-10-09 07:59:08.105245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:06.202 [2024-10-09 07:59:08.105269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:06.202 [2024-10-09 07:59:08.105282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:19:06.202 [2024-10-09 07:59:08.105293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.202 [2024-10-09 07:59:08.105327] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:06.202 [2024-10-09 07:59:08.105373] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:06.202 [2024-10-09 07:59:08.105417] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:06.202 [2024-10-09 07:59:08.105437] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:19:06.202 [2024-10-09 07:59:08.105554] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:06.202 [2024-10-09 07:59:08.105584] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:06.202 [2024-10-09 07:59:08.105601] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:06.202 [2024-10-09 07:59:08.105616] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:06.202 [2024-10-09 07:59:08.105629] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:06.202 [2024-10-09 07:59:08.105641] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:06.202 [2024-10-09 07:59:08.105652] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:06.202 [2024-10-09 07:59:08.105663] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:06.202 [2024-10-09 07:59:08.105673] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:06.202 [2024-10-09 07:59:08.105685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:06.202 [2024-10-09 07:59:08.105696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:06.202 [2024-10-09 07:59:08.105712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.362 ms 00:19:06.202 [2024-10-09 07:59:08.105724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.202 [2024-10-09 07:59:08.105853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:06.202 [2024-10-09 07:59:08.105870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:06.202 [2024-10-09 07:59:08.105882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:19:06.202 [2024-10-09 07:59:08.105892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.202 [2024-10-09 07:59:08.106006] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:06.202 [2024-10-09 07:59:08.106024] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:06.202 [2024-10-09 07:59:08.106036] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:06.202 [2024-10-09 07:59:08.106053] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:06.202 [2024-10-09 07:59:08.106065] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:06.202 [2024-10-09 07:59:08.106075] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:06.202 [2024-10-09 07:59:08.106086] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:06.202 [2024-10-09 07:59:08.106096] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:06.202 [2024-10-09 07:59:08.106106] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:06.202 [2024-10-09 07:59:08.106116] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:06.202 [2024-10-09 07:59:08.106126] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:06.202 [2024-10-09 07:59:08.106151] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:06.202 [2024-10-09 07:59:08.106168] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:06.202 [2024-10-09 07:59:08.106187] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:06.202 [2024-10-09 07:59:08.106206] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:19:06.202 [2024-10-09 07:59:08.106226] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:06.202 [2024-10-09 07:59:08.106247] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:06.202 [2024-10-09 07:59:08.106265] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:19:06.202 [2024-10-09 07:59:08.106287] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:06.202 [2024-10-09 07:59:08.106299] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:06.202 [2024-10-09 07:59:08.106309] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:06.202 [2024-10-09 07:59:08.106319] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:06.202 [2024-10-09 07:59:08.106352] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:06.202 [2024-10-09 07:59:08.106367] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:06.202 [2024-10-09 07:59:08.106377] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:06.202 [2024-10-09 07:59:08.106388] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:06.202 [2024-10-09 07:59:08.106399] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:06.202 [2024-10-09 07:59:08.106409] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:06.202 [2024-10-09 07:59:08.106419] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:06.202 [2024-10-09 07:59:08.106429] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:19:06.202 [2024-10-09 07:59:08.106439] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:06.202 [2024-10-09 07:59:08.106449] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:06.202 [2024-10-09 07:59:08.106459] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:19:06.202 [2024-10-09 07:59:08.106469] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:06.202 [2024-10-09 07:59:08.106479] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:06.202 [2024-10-09 07:59:08.106489] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:19:06.202 [2024-10-09 07:59:08.106499] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:06.202 [2024-10-09 07:59:08.106509] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:06.202 [2024-10-09 07:59:08.106519] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:19:06.202 [2024-10-09 07:59:08.106529] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:06.202 [2024-10-09 07:59:08.106539] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:06.202 [2024-10-09 07:59:08.106549] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:19:06.202 [2024-10-09 07:59:08.106558] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:06.202 [2024-10-09 07:59:08.106569] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:06.202 [2024-10-09 07:59:08.106580] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:06.202 [2024-10-09 07:59:08.106591] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:06.202 [2024-10-09 07:59:08.106601] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:06.202 [2024-10-09 07:59:08.106613] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:06.202 [2024-10-09 07:59:08.106623] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:06.202 [2024-10-09 07:59:08.106633] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:06.202 
[2024-10-09 07:59:08.106645] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:06.202 [2024-10-09 07:59:08.106655] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:06.202 [2024-10-09 07:59:08.106665] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:06.202 [2024-10-09 07:59:08.106678] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:06.202 [2024-10-09 07:59:08.106691] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:06.202 [2024-10-09 07:59:08.106710] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:06.202 [2024-10-09 07:59:08.106721] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:19:06.202 [2024-10-09 07:59:08.106733] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:19:06.202 [2024-10-09 07:59:08.106744] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:19:06.202 [2024-10-09 07:59:08.106755] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:19:06.202 [2024-10-09 07:59:08.106766] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:19:06.203 [2024-10-09 07:59:08.106777] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:19:06.203 [2024-10-09 07:59:08.106787] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:19:06.203 [2024-10-09 07:59:08.106799] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:19:06.203 [2024-10-09 07:59:08.106810] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:19:06.203 [2024-10-09 07:59:08.106821] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:19:06.203 [2024-10-09 07:59:08.106832] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:19:06.203 [2024-10-09 07:59:08.106843] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:19:06.203 [2024-10-09 07:59:08.106855] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:19:06.203 [2024-10-09 07:59:08.106866] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:06.203 [2024-10-09 07:59:08.106878] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:06.203 [2024-10-09 07:59:08.106890] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:19:06.203 [2024-10-09 07:59:08.106901] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:06.203 [2024-10-09 07:59:08.106912] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:06.203 [2024-10-09 07:59:08.106924] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:06.203 [2024-10-09 07:59:08.106937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:06.203 [2024-10-09 07:59:08.106953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:06.203 [2024-10-09 07:59:08.106966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.002 ms 00:19:06.203 [2024-10-09 07:59:08.106977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.203 [2024-10-09 07:59:08.147866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:06.203 [2024-10-09 07:59:08.147938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:06.203 [2024-10-09 07:59:08.147959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.811 ms 00:19:06.203 [2024-10-09 07:59:08.147972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.203 [2024-10-09 07:59:08.148204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:06.203 [2024-10-09 07:59:08.148258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:06.203 [2024-10-09 07:59:08.148280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:19:06.203 [2024-10-09 07:59:08.148293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.203 [2024-10-09 07:59:08.189564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:06.203 [2024-10-09 07:59:08.189633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:06.203 [2024-10-09 07:59:08.189653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.231 ms 00:19:06.203 [2024-10-09 07:59:08.189666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.203 [2024-10-09 07:59:08.189831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:06.203 [2024-10-09 07:59:08.189851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:06.203 [2024-10-09 07:59:08.189865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:06.203 [2024-10-09 07:59:08.189876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.203 [2024-10-09 07:59:08.190200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:06.203 [2024-10-09 07:59:08.190230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:06.203 [2024-10-09 07:59:08.190244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.287 ms 00:19:06.203 [2024-10-09 07:59:08.190256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.203 [2024-10-09 07:59:08.190432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:06.203 [2024-10-09 07:59:08.190452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:06.203 [2024-10-09 07:59:08.190466] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.144 ms 00:19:06.203 [2024-10-09 07:59:08.190477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.203 [2024-10-09 07:59:08.207138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:06.203 [2024-10-09 07:59:08.207198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:06.203 [2024-10-09 07:59:08.207217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.628 ms 00:19:06.203 [2024-10-09 07:59:08.207230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.462 [2024-10-09 07:59:08.224050] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:19:06.462 [2024-10-09 07:59:08.224109] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:06.462 [2024-10-09 07:59:08.224135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:06.462 [2024-10-09 07:59:08.224148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:06.462 [2024-10-09 07:59:08.224162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.688 ms 00:19:06.462 [2024-10-09 07:59:08.224173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.462 [2024-10-09 07:59:08.255960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:06.462 [2024-10-09 07:59:08.256031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:06.462 [2024-10-09 07:59:08.256051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.552 ms 00:19:06.462 [2024-10-09 07:59:08.256073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.462 [2024-10-09 07:59:08.272882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:06.462 [2024-10-09 07:59:08.272983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:06.462 [2024-10-09 07:59:08.273002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.619 ms 00:19:06.462 [2024-10-09 07:59:08.273016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.462 [2024-10-09 07:59:08.289505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:06.462 [2024-10-09 07:59:08.289580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:06.462 [2024-10-09 07:59:08.289600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.319 ms 00:19:06.462 [2024-10-09 07:59:08.289611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.462 [2024-10-09 07:59:08.290677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:06.462 [2024-10-09 07:59:08.290717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:06.462 [2024-10-09 07:59:08.290734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.731 ms 00:19:06.462 [2024-10-09 07:59:08.290745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.462 [2024-10-09 07:59:08.368320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:06.462 [2024-10-09 07:59:08.368424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:06.462 [2024-10-09 07:59:08.368447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 77.536 ms 00:19:06.462 [2024-10-09 07:59:08.368459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.462 [2024-10-09 07:59:08.381684] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:06.462 [2024-10-09 07:59:08.395638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:06.462 [2024-10-09 07:59:08.395711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:06.462 [2024-10-09 07:59:08.395729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.005 ms 00:19:06.462 [2024-10-09 07:59:08.395741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.462 [2024-10-09 07:59:08.395902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:06.462 [2024-10-09 07:59:08.395923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:06.462 [2024-10-09 07:59:08.395937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:19:06.462 [2024-10-09 07:59:08.395949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.462 [2024-10-09 07:59:08.396019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:06.462 [2024-10-09 07:59:08.396043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:06.462 [2024-10-09 07:59:08.396059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:19:06.462 [2024-10-09 07:59:08.396070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.462 [2024-10-09 07:59:08.396111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:06.462 [2024-10-09 07:59:08.396126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:06.462 [2024-10-09 07:59:08.396139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:19:06.462 [2024-10-09 07:59:08.396150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.462 [2024-10-09 07:59:08.396189] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:06.462 [2024-10-09 07:59:08.396205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:06.462 [2024-10-09 07:59:08.396216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:06.462 [2024-10-09 07:59:08.396229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:19:06.462 [2024-10-09 07:59:08.396243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.462 [2024-10-09 07:59:08.427973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:06.462 [2024-10-09 07:59:08.428025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:06.462 [2024-10-09 07:59:08.428044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.696 ms 00:19:06.462 [2024-10-09 07:59:08.428055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.462 [2024-10-09 07:59:08.428203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:06.462 [2024-10-09 07:59:08.428225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:06.462 [2024-10-09 07:59:08.428243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:19:06.462 [2024-10-09 07:59:08.428254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
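As a side note, the sizes in the ftl_layout.c dump above are self-consistent, given the 4 KiB FTL block size implied by the region dump (0x20 blocks shown as 0.12 MiB). A quick sanity check:

    # l2p region: 23592960 L2P entries x 4-byte address size = 90 MiB
    echo $(( 23592960 * 4 / 1024 / 1024 ))   # prints 90, matching 'blocks: 90.00 MiB'
    # p2l0..p2l3 regions: 2048 P2L checkpoint pages x 4 KiB = 8 MiB each
    echo $(( 2048 * 4096 / 1024 / 1024 ))    # prints 8, matching 'blocks: 8.00 MiB'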
00:19:06.462 [2024-10-09 07:59:08.429362] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:06.462 [2024-10-09 07:59:08.433562] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 356.754 ms, result 0 00:19:06.462 [2024-10-09 07:59:08.434310] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:06.462 [2024-10-09 07:59:08.451240] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:07.844  [2024-10-09T07:59:10.791Z] Copying: 26/256 [MB] (26 MBps) [2024-10-09T07:59:11.725Z] Copying: 52/256 [MB] (26 MBps) [2024-10-09T07:59:12.660Z] Copying: 78/256 [MB] (25 MBps) [2024-10-09T07:59:13.595Z] Copying: 102/256 [MB] (24 MBps) [2024-10-09T07:59:14.529Z] Copying: 127/256 [MB] (25 MBps) [2024-10-09T07:59:15.464Z] Copying: 153/256 [MB] (25 MBps) [2024-10-09T07:59:16.838Z] Copying: 179/256 [MB] (25 MBps) [2024-10-09T07:59:17.805Z] Copying: 206/256 [MB] (27 MBps) [2024-10-09T07:59:18.372Z] Copying: 233/256 [MB] (27 MBps) [2024-10-09T07:59:18.372Z] Copying: 256/256 [MB] (average 26 MBps)[2024-10-09 07:59:18.262503] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:16.360 [2024-10-09 07:59:18.274893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.360 [2024-10-09 07:59:18.274950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:16.360 [2024-10-09 07:59:18.274969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:16.360 [2024-10-09 07:59:18.274981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.360 [2024-10-09 07:59:18.275013] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:19:16.360 [2024-10-09 07:59:18.278366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.360 [2024-10-09 07:59:18.278399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:16.360 [2024-10-09 07:59:18.278415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.332 ms 00:19:16.360 [2024-10-09 07:59:18.278426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.360 [2024-10-09 07:59:18.279997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.360 [2024-10-09 07:59:18.280040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:16.360 [2024-10-09 07:59:18.280057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.542 ms 00:19:16.360 [2024-10-09 07:59:18.280077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.360 [2024-10-09 07:59:18.287055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.360 [2024-10-09 07:59:18.287096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:16.360 [2024-10-09 07:59:18.287111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.955 ms 00:19:16.360 [2024-10-09 07:59:18.287123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.360 [2024-10-09 07:59:18.294660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.360 [2024-10-09 07:59:18.294696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:16.360 
[2024-10-09 07:59:18.294711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.475 ms 00:19:16.360 [2024-10-09 07:59:18.294729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.360 [2024-10-09 07:59:18.325729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.360 [2024-10-09 07:59:18.325782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:16.360 [2024-10-09 07:59:18.325801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.943 ms 00:19:16.360 [2024-10-09 07:59:18.325813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.360 [2024-10-09 07:59:18.343930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.361 [2024-10-09 07:59:18.343989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:16.361 [2024-10-09 07:59:18.344008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.045 ms 00:19:16.361 [2024-10-09 07:59:18.344021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.361 [2024-10-09 07:59:18.344193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.361 [2024-10-09 07:59:18.344213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:16.361 [2024-10-09 07:59:18.344226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.097 ms 00:19:16.361 [2024-10-09 07:59:18.344238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.620 [2024-10-09 07:59:18.375899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.620 [2024-10-09 07:59:18.375979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:16.620 [2024-10-09 07:59:18.375998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.636 ms 00:19:16.620 [2024-10-09 07:59:18.376010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.620 [2024-10-09 07:59:18.407769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.620 [2024-10-09 07:59:18.407835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:16.620 [2024-10-09 07:59:18.407854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.680 ms 00:19:16.620 [2024-10-09 07:59:18.407865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.620 [2024-10-09 07:59:18.438667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.620 [2024-10-09 07:59:18.438723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:16.620 [2024-10-09 07:59:18.438741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.702 ms 00:19:16.620 [2024-10-09 07:59:18.438753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.620 [2024-10-09 07:59:18.470629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.620 [2024-10-09 07:59:18.470705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:16.620 [2024-10-09 07:59:18.470724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.764 ms 00:19:16.620 [2024-10-09 07:59:18.470736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.620 [2024-10-09 07:59:18.470852] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:16.620 [2024-10-09 07:59:18.470879] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:16.620 [2024-10-09 07:59:18.470894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:16.620 [2024-10-09 07:59:18.470905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:16.620 [2024-10-09 07:59:18.470927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:16.620 [2024-10-09 07:59:18.470939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:16.620 [2024-10-09 07:59:18.470950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:16.620 [2024-10-09 07:59:18.470962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:16.620 [2024-10-09 07:59:18.470974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:16.620 [2024-10-09 07:59:18.470986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:16.620 [2024-10-09 07:59:18.470997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:16.620 [2024-10-09 07:59:18.471009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:16.620 [2024-10-09 07:59:18.471020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:16.620 [2024-10-09 07:59:18.471032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:16.620 [2024-10-09 07:59:18.471043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:16.620 [2024-10-09 07:59:18.471055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:16.620 [2024-10-09 07:59:18.471066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:16.620 [2024-10-09 07:59:18.471078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:16.620 [2024-10-09 07:59:18.471090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:16.620 [2024-10-09 07:59:18.471101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:16.620 [2024-10-09 07:59:18.471113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:16.620 [2024-10-09 07:59:18.471125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:16.620 [2024-10-09 07:59:18.471136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:16.620 [2024-10-09 07:59:18.471147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:16.620 [2024-10-09 07:59:18.471159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:16.620 [2024-10-09 07:59:18.471171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:16.620 [2024-10-09 
07:59:18.471182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:16.620 [2024-10-09 07:59:18.471193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:16.620 [2024-10-09 07:59:18.471204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:16.620 [2024-10-09 07:59:18.471216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:16.620 [2024-10-09 07:59:18.471227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:16.620 [2024-10-09 07:59:18.471239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:16.620 [2024-10-09 07:59:18.471250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:16.620 [2024-10-09 07:59:18.471262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:16.620 [2024-10-09 07:59:18.471274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:16.620 [2024-10-09 07:59:18.471286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:16.620 [2024-10-09 07:59:18.471298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:16.620 [2024-10-09 07:59:18.471309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:16.620 [2024-10-09 07:59:18.471321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:16.621 [2024-10-09 07:59:18.471346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:16.621 [2024-10-09 07:59:18.471361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:16.621 [2024-10-09 07:59:18.471373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:16.621 [2024-10-09 07:59:18.471384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:16.621 [2024-10-09 07:59:18.471396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:16.621 [2024-10-09 07:59:18.471407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:16.621 [2024-10-09 07:59:18.471419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:16.621 [2024-10-09 07:59:18.471430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:16.621 [2024-10-09 07:59:18.471442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:16.621 [2024-10-09 07:59:18.471453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:16.621 [2024-10-09 07:59:18.471464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:16.621 [2024-10-09 07:59:18.471476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 
00:19:16.621 [2024-10-09 07:59:18.471487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:16.621 [2024-10-09 07:59:18.471499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:16.621 [2024-10-09 07:59:18.471510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:16.621 [2024-10-09 07:59:18.471521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:16.621 [2024-10-09 07:59:18.471533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:16.621 [2024-10-09 07:59:18.471544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:16.621 [2024-10-09 07:59:18.471556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:16.621 [2024-10-09 07:59:18.471568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:16.621 [2024-10-09 07:59:18.471579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:16.621 [2024-10-09 07:59:18.471598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:16.621 [2024-10-09 07:59:18.471612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:16.621 [2024-10-09 07:59:18.471623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:16.621 [2024-10-09 07:59:18.471635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:16.621 [2024-10-09 07:59:18.471646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:16.621 [2024-10-09 07:59:18.471658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:16.621 [2024-10-09 07:59:18.471670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:16.621 [2024-10-09 07:59:18.471682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:16.621 [2024-10-09 07:59:18.471695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:16.621 [2024-10-09 07:59:18.471706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:16.621 [2024-10-09 07:59:18.471718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:16.621 [2024-10-09 07:59:18.471730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:16.621 [2024-10-09 07:59:18.471741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:16.621 [2024-10-09 07:59:18.471753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:16.621 [2024-10-09 07:59:18.471765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:16.621 [2024-10-09 07:59:18.471776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 
wr_cnt: 0 state: free 00:19:16.621 [2024-10-09 07:59:18.471788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:16.621 [2024-10-09 07:59:18.471799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:16.621 [2024-10-09 07:59:18.471810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:16.621 [2024-10-09 07:59:18.471822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:16.621 [2024-10-09 07:59:18.471834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:16.621 [2024-10-09 07:59:18.471845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:16.621 [2024-10-09 07:59:18.471857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:16.621 [2024-10-09 07:59:18.471868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:16.621 [2024-10-09 07:59:18.471880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:16.621 [2024-10-09 07:59:18.471891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:16.621 [2024-10-09 07:59:18.471903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:16.621 [2024-10-09 07:59:18.471914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:16.621 [2024-10-09 07:59:18.471926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:16.621 [2024-10-09 07:59:18.471937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:16.621 [2024-10-09 07:59:18.471949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:16.621 [2024-10-09 07:59:18.471960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:16.621 [2024-10-09 07:59:18.471972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:16.621 [2024-10-09 07:59:18.471984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:16.621 [2024-10-09 07:59:18.472003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:16.621 [2024-10-09 07:59:18.472014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:16.621 [2024-10-09 07:59:18.472026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:16.621 [2024-10-09 07:59:18.472038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:16.621 [2024-10-09 07:59:18.472051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:16.621 [2024-10-09 07:59:18.472062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:16.621 [2024-10-09 07:59:18.472079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:16.621 [2024-10-09 07:59:18.472119] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:16.621 [2024-10-09 07:59:18.472130] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 2a716a08-2588-4711-9bfe-c66b02b59b71 00:19:16.621 [2024-10-09 07:59:18.472142] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:16.621 [2024-10-09 07:59:18.472153] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:16.621 [2024-10-09 07:59:18.472164] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:16.621 [2024-10-09 07:59:18.472175] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:16.621 [2024-10-09 07:59:18.472192] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:16.621 [2024-10-09 07:59:18.472203] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:16.621 [2024-10-09 07:59:18.472214] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:16.621 [2024-10-09 07:59:18.472224] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:16.621 [2024-10-09 07:59:18.472234] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:16.621 [2024-10-09 07:59:18.472245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.621 [2024-10-09 07:59:18.472257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:16.621 [2024-10-09 07:59:18.472269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.396 ms 00:19:16.621 [2024-10-09 07:59:18.472280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.621 [2024-10-09 07:59:18.489075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.621 [2024-10-09 07:59:18.489116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:16.621 [2024-10-09 07:59:18.489139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.768 ms 00:19:16.621 [2024-10-09 07:59:18.489151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.621 [2024-10-09 07:59:18.489630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.621 [2024-10-09 07:59:18.489661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:16.621 [2024-10-09 07:59:18.489676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.429 ms 00:19:16.621 [2024-10-09 07:59:18.489687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.621 [2024-10-09 07:59:18.529812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:16.621 [2024-10-09 07:59:18.529883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:16.621 [2024-10-09 07:59:18.529901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:16.621 [2024-10-09 07:59:18.529912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.621 [2024-10-09 07:59:18.530047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:16.621 [2024-10-09 07:59:18.530066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:16.621 [2024-10-09 07:59:18.530078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:16.621 [2024-10-09 07:59:18.530089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:19:16.621 [2024-10-09 07:59:18.530154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:16.621 [2024-10-09 07:59:18.530173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:16.621 [2024-10-09 07:59:18.530192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:16.621 [2024-10-09 07:59:18.530204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.621 [2024-10-09 07:59:18.530228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:16.621 [2024-10-09 07:59:18.530242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:16.621 [2024-10-09 07:59:18.530254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:16.621 [2024-10-09 07:59:18.530264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.880 [2024-10-09 07:59:18.634320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:16.880 [2024-10-09 07:59:18.634401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:16.880 [2024-10-09 07:59:18.634427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:16.880 [2024-10-09 07:59:18.634439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.880 [2024-10-09 07:59:18.719036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:16.880 [2024-10-09 07:59:18.719107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:16.880 [2024-10-09 07:59:18.719126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:16.880 [2024-10-09 07:59:18.719138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.880 [2024-10-09 07:59:18.719222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:16.880 [2024-10-09 07:59:18.719240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:16.880 [2024-10-09 07:59:18.719252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:16.880 [2024-10-09 07:59:18.719263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.880 [2024-10-09 07:59:18.719308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:16.880 [2024-10-09 07:59:18.719322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:16.880 [2024-10-09 07:59:18.719370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:16.880 [2024-10-09 07:59:18.719383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.880 [2024-10-09 07:59:18.719513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:16.880 [2024-10-09 07:59:18.719533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:16.880 [2024-10-09 07:59:18.719546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:16.880 [2024-10-09 07:59:18.719558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.880 [2024-10-09 07:59:18.719626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:16.880 [2024-10-09 07:59:18.719670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:16.880 [2024-10-09 07:59:18.719683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:16.880 
[2024-10-09 07:59:18.719694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.880 [2024-10-09 07:59:18.719743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:16.880 [2024-10-09 07:59:18.719760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:16.880 [2024-10-09 07:59:18.719772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:16.880 [2024-10-09 07:59:18.719782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.880 [2024-10-09 07:59:18.719842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:16.880 [2024-10-09 07:59:18.719865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:16.880 [2024-10-09 07:59:18.719879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:16.880 [2024-10-09 07:59:18.719890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.880 [2024-10-09 07:59:18.720060] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 445.167 ms, result 0 00:19:18.256 00:19:18.256 00:19:18.256 07:59:19 ftl.ftl_trim -- ftl/trim.sh@72 -- # svcpid=76358 00:19:18.256 07:59:19 ftl.ftl_trim -- ftl/trim.sh@73 -- # waitforlisten 76358 00:19:18.256 07:59:19 ftl.ftl_trim -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:19:18.256 07:59:19 ftl.ftl_trim -- common/autotest_common.sh@831 -- # '[' -z 76358 ']' 00:19:18.256 07:59:19 ftl.ftl_trim -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:18.256 07:59:19 ftl.ftl_trim -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:18.256 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:18.256 07:59:19 ftl.ftl_trim -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:18.256 07:59:19 ftl.ftl_trim -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:18.256 07:59:19 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:19:18.256 [2024-10-09 07:59:20.104219] Starting SPDK v25.01-pre git sha1 1c2942c86 / DPDK 24.03.0 initialization... 
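The `waitforlisten 76358` call above blocks until the freshly started `spdk_tgt` answers JSON-RPC requests on /var/tmp/spdk.sock. Below is a minimal sketch of the same readiness check, assuming the default socket path shown in the log and raw JSON-over-Unix-socket framing; `rpc_get_methods` is used here as a trivial always-available RPC, and the single recv() is a simplification of how rpc.py actually reads replies:

    import json
    import socket
    import time

    SOCK_PATH = "/var/tmp/spdk.sock"  # RPC listen address shown in the log

    def wait_for_rpc(timeout=100.0, interval=0.5):
        # Poll until spdk_tgt accepts a trivial JSON-RPC request,
        # mirroring what waitforlisten does in autotest_common.sh.
        request = json.dumps({"jsonrpc": "2.0",
                              "method": "rpc_get_methods",
                              "id": 1}).encode()
        deadline = time.monotonic() + timeout
        while time.monotonic() < deadline:
            try:
                with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
                    sock.connect(SOCK_PATH)
                    sock.sendall(request)
                    # One recv is enough for a readiness probe; rpc.py keeps
                    # reading until a complete JSON object parses.
                    if sock.recv(65536):
                        return True
            except OSError:
                pass  # socket not created yet, or target not accepting
            time.sleep(interval)
        return False

Once this returns True the test can start issuing RPCs against the target, such as the `bdev_ftl_unmap` calls that appear later in this log.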
00:19:18.256 [2024-10-09 07:59:20.104408] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76358 ] 00:19:18.530 [2024-10-09 07:59:20.275442] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:18.530 [2024-10-09 07:59:20.462175] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:19:19.477 07:59:21 ftl.ftl_trim -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:19.477 07:59:21 ftl.ftl_trim -- common/autotest_common.sh@864 -- # return 0 00:19:19.477 07:59:21 ftl.ftl_trim -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:19:19.736 [2024-10-09 07:59:21.530282] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:19.736 [2024-10-09 07:59:21.530379] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:19.736 [2024-10-09 07:59:21.694354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.736 [2024-10-09 07:59:21.694429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:19.736 [2024-10-09 07:59:21.694457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:19:19.736 [2024-10-09 07:59:21.694472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.736 [2024-10-09 07:59:21.698583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.736 [2024-10-09 07:59:21.698632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:19.736 [2024-10-09 07:59:21.698653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.072 ms 00:19:19.736 [2024-10-09 07:59:21.698667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.736 [2024-10-09 07:59:21.698896] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:19.736 [2024-10-09 07:59:21.699872] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:19.736 [2024-10-09 07:59:21.699923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.736 [2024-10-09 07:59:21.699938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:19.736 [2024-10-09 07:59:21.699953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.043 ms 00:19:19.736 [2024-10-09 07:59:21.699964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.736 [2024-10-09 07:59:21.701312] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:19.736 [2024-10-09 07:59:21.717985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.736 [2024-10-09 07:59:21.718040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:19.736 [2024-10-09 07:59:21.718060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.679 ms 00:19:19.736 [2024-10-09 07:59:21.718078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.736 [2024-10-09 07:59:21.718256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.736 [2024-10-09 07:59:21.718293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:19.736 [2024-10-09 07:59:21.718309] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:19:19.736 [2024-10-09 07:59:21.718326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.736 [2024-10-09 07:59:21.722857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.736 [2024-10-09 07:59:21.722923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:19.737 [2024-10-09 07:59:21.722941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.426 ms 00:19:19.737 [2024-10-09 07:59:21.722959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.737 [2024-10-09 07:59:21.723151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.737 [2024-10-09 07:59:21.723181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:19.737 [2024-10-09 07:59:21.723196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.118 ms 00:19:19.737 [2024-10-09 07:59:21.723214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.737 [2024-10-09 07:59:21.723254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.737 [2024-10-09 07:59:21.723285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:19.737 [2024-10-09 07:59:21.723299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:19:19.737 [2024-10-09 07:59:21.723316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.737 [2024-10-09 07:59:21.723371] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:19.737 [2024-10-09 07:59:21.727652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.737 [2024-10-09 07:59:21.727692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:19.737 [2024-10-09 07:59:21.727715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.284 ms 00:19:19.737 [2024-10-09 07:59:21.727734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.737 [2024-10-09 07:59:21.727812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.737 [2024-10-09 07:59:21.727831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:19.737 [2024-10-09 07:59:21.727851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:19:19.737 [2024-10-09 07:59:21.727863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.737 [2024-10-09 07:59:21.727900] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:19.737 [2024-10-09 07:59:21.727931] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:19.737 [2024-10-09 07:59:21.727994] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:19.737 [2024-10-09 07:59:21.728040] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:19:19.737 [2024-10-09 07:59:21.728166] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:19.737 [2024-10-09 07:59:21.728185] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:19.737 [2024-10-09 07:59:21.728209] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:19.737 [2024-10-09 07:59:21.728227] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:19.737 [2024-10-09 07:59:21.728247] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:19.737 [2024-10-09 07:59:21.728260] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:19.737 [2024-10-09 07:59:21.728279] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:19.737 [2024-10-09 07:59:21.728292] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:19.737 [2024-10-09 07:59:21.728312] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:19.737 [2024-10-09 07:59:21.728352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.737 [2024-10-09 07:59:21.728375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:19.737 [2024-10-09 07:59:21.728391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.461 ms 00:19:19.737 [2024-10-09 07:59:21.728407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.737 [2024-10-09 07:59:21.728534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.737 [2024-10-09 07:59:21.728569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:19.737 [2024-10-09 07:59:21.728584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:19:19.737 [2024-10-09 07:59:21.728602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.737 [2024-10-09 07:59:21.728720] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:19.737 [2024-10-09 07:59:21.728753] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:19.737 [2024-10-09 07:59:21.728767] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:19.737 [2024-10-09 07:59:21.728785] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:19.737 [2024-10-09 07:59:21.728799] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:19.737 [2024-10-09 07:59:21.728816] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:19.737 [2024-10-09 07:59:21.728828] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:19.737 [2024-10-09 07:59:21.728852] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:19.737 [2024-10-09 07:59:21.728865] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:19.737 [2024-10-09 07:59:21.728881] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:19.737 [2024-10-09 07:59:21.728894] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:19.737 [2024-10-09 07:59:21.728911] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:19.737 [2024-10-09 07:59:21.728923] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:19.737 [2024-10-09 07:59:21.728940] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:19.737 [2024-10-09 07:59:21.728952] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:19:19.737 [2024-10-09 07:59:21.728969] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:19.737 
[2024-10-09 07:59:21.728982] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:19.737 [2024-10-09 07:59:21.728999] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:19:19.737 [2024-10-09 07:59:21.729026] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:19.737 [2024-10-09 07:59:21.729044] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:19.737 [2024-10-09 07:59:21.729060] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:19.737 [2024-10-09 07:59:21.729076] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:19.737 [2024-10-09 07:59:21.729089] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:19.737 [2024-10-09 07:59:21.729110] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:19.737 [2024-10-09 07:59:21.729121] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:19.737 [2024-10-09 07:59:21.729137] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:19.737 [2024-10-09 07:59:21.729150] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:19.737 [2024-10-09 07:59:21.729166] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:19.737 [2024-10-09 07:59:21.729178] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:19.737 [2024-10-09 07:59:21.729194] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:19:19.737 [2024-10-09 07:59:21.729206] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:19.737 [2024-10-09 07:59:21.729222] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:19.737 [2024-10-09 07:59:21.729234] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:19:19.737 [2024-10-09 07:59:21.729252] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:19.737 [2024-10-09 07:59:21.729265] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:19.737 [2024-10-09 07:59:21.729282] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:19:19.737 [2024-10-09 07:59:21.729294] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:19.737 [2024-10-09 07:59:21.729310] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:19.737 [2024-10-09 07:59:21.729322] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:19:19.737 [2024-10-09 07:59:21.729365] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:19.737 [2024-10-09 07:59:21.729380] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:19.737 [2024-10-09 07:59:21.729398] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:19:19.737 [2024-10-09 07:59:21.729410] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:19.737 [2024-10-09 07:59:21.729426] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:19.737 [2024-10-09 07:59:21.729439] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:19.737 [2024-10-09 07:59:21.729456] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:19.737 [2024-10-09 07:59:21.729468] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:19.737 [2024-10-09 07:59:21.729486] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:19:19.737 [2024-10-09 07:59:21.729498] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:19.737 [2024-10-09 07:59:21.729514] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:19.737 [2024-10-09 07:59:21.729527] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:19.737 [2024-10-09 07:59:21.729543] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:19.737 [2024-10-09 07:59:21.729556] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:19.737 [2024-10-09 07:59:21.729574] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:19.737 [2024-10-09 07:59:21.729589] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:19.737 [2024-10-09 07:59:21.729612] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:19.737 [2024-10-09 07:59:21.729625] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:19:19.737 [2024-10-09 07:59:21.729641] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:19:19.737 [2024-10-09 07:59:21.729654] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:19:19.737 [2024-10-09 07:59:21.729672] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:19:19.737 [2024-10-09 07:59:21.729685] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:19:19.737 [2024-10-09 07:59:21.729702] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:19:19.737 [2024-10-09 07:59:21.729715] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:19:19.737 [2024-10-09 07:59:21.729732] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:19:19.737 [2024-10-09 07:59:21.729745] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:19:19.737 [2024-10-09 07:59:21.729761] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:19:19.737 [2024-10-09 07:59:21.729774] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:19:19.738 [2024-10-09 07:59:21.729790] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:19:19.738 [2024-10-09 07:59:21.729805] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:19:19.738 [2024-10-09 07:59:21.729821] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:19.738 [2024-10-09 
07:59:21.729836] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:19.738 [2024-10-09 07:59:21.729867] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:19.738 [2024-10-09 07:59:21.729880] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:19.738 [2024-10-09 07:59:21.729897] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:19.738 [2024-10-09 07:59:21.729910] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:19.738 [2024-10-09 07:59:21.729928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.738 [2024-10-09 07:59:21.729941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:19.738 [2024-10-09 07:59:21.729958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.273 ms 00:19:19.738 [2024-10-09 07:59:21.729971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.996 [2024-10-09 07:59:21.764574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.996 [2024-10-09 07:59:21.764636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:19.996 [2024-10-09 07:59:21.764660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.517 ms 00:19:19.996 [2024-10-09 07:59:21.764673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.996 [2024-10-09 07:59:21.764863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.996 [2024-10-09 07:59:21.764884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:19.996 [2024-10-09 07:59:21.764900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:19:19.996 [2024-10-09 07:59:21.764912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.996 [2024-10-09 07:59:21.813953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.996 [2024-10-09 07:59:21.814013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:19.996 [2024-10-09 07:59:21.814042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.994 ms 00:19:19.996 [2024-10-09 07:59:21.814057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.996 [2024-10-09 07:59:21.814234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.996 [2024-10-09 07:59:21.814265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:19.996 [2024-10-09 07:59:21.814287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:19.996 [2024-10-09 07:59:21.814306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.996 [2024-10-09 07:59:21.814665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.996 [2024-10-09 07:59:21.814695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:19.996 [2024-10-09 07:59:21.814717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.304 ms 00:19:19.996 [2024-10-09 07:59:21.814730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:19:19.996 [2024-10-09 07:59:21.814900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.996 [2024-10-09 07:59:21.814927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:19.996 [2024-10-09 07:59:21.814948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.133 ms 00:19:19.996 [2024-10-09 07:59:21.814961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.996 [2024-10-09 07:59:21.833778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.996 [2024-10-09 07:59:21.833838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:19.996 [2024-10-09 07:59:21.833865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.769 ms 00:19:19.996 [2024-10-09 07:59:21.833885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.996 [2024-10-09 07:59:21.850648] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:19:19.996 [2024-10-09 07:59:21.850695] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:19.996 [2024-10-09 07:59:21.850722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.996 [2024-10-09 07:59:21.850736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:19.996 [2024-10-09 07:59:21.850755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.666 ms 00:19:19.996 [2024-10-09 07:59:21.850768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.996 [2024-10-09 07:59:21.880758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.996 [2024-10-09 07:59:21.880811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:19.996 [2024-10-09 07:59:21.880836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.882 ms 00:19:19.996 [2024-10-09 07:59:21.880867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.996 [2024-10-09 07:59:21.896793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.996 [2024-10-09 07:59:21.896861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:19.996 [2024-10-09 07:59:21.896894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.782 ms 00:19:19.996 [2024-10-09 07:59:21.896909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.996 [2024-10-09 07:59:21.912533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.996 [2024-10-09 07:59:21.912579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:19.996 [2024-10-09 07:59:21.912603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.488 ms 00:19:19.996 [2024-10-09 07:59:21.912617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.996 [2024-10-09 07:59:21.913550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.996 [2024-10-09 07:59:21.913589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:19.996 [2024-10-09 07:59:21.913612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.745 ms 00:19:19.996 [2024-10-09 07:59:21.913625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.996 [2024-10-09 
07:59:21.989668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.996 [2024-10-09 07:59:21.989732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:19.996 [2024-10-09 07:59:21.989762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 75.990 ms 00:19:19.996 [2024-10-09 07:59:21.989782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.996 [2024-10-09 07:59:22.005228] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:20.255 [2024-10-09 07:59:22.019556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.255 [2024-10-09 07:59:22.019664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:20.255 [2024-10-09 07:59:22.019687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.620 ms 00:19:20.255 [2024-10-09 07:59:22.019706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.255 [2024-10-09 07:59:22.019857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.255 [2024-10-09 07:59:22.019886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:20.255 [2024-10-09 07:59:22.019903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:19:20.255 [2024-10-09 07:59:22.019920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.255 [2024-10-09 07:59:22.019994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.255 [2024-10-09 07:59:22.020036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:20.255 [2024-10-09 07:59:22.020051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:19:20.255 [2024-10-09 07:59:22.020070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.255 [2024-10-09 07:59:22.020105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.255 [2024-10-09 07:59:22.020138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:20.255 [2024-10-09 07:59:22.020152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:20.255 [2024-10-09 07:59:22.020180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.255 [2024-10-09 07:59:22.020232] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:20.255 [2024-10-09 07:59:22.020267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.255 [2024-10-09 07:59:22.020280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:20.255 [2024-10-09 07:59:22.020298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:19:20.255 [2024-10-09 07:59:22.020311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.255 [2024-10-09 07:59:22.051661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.255 [2024-10-09 07:59:22.051707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:20.255 [2024-10-09 07:59:22.051729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.288 ms 00:19:20.255 [2024-10-09 07:59:22.051742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.255 [2024-10-09 07:59:22.051883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.255 [2024-10-09 07:59:22.051916] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:20.255 [2024-10-09 07:59:22.051934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:19:20.255 [2024-10-09 07:59:22.051946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.255 [2024-10-09 07:59:22.053009] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:20.255 [2024-10-09 07:59:22.058587] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 358.302 ms, result 0 00:19:20.255 [2024-10-09 07:59:22.059972] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:20.255 Some configs were skipped because the RPC state that can call them passed over. 00:19:20.255 07:59:22 ftl.ftl_trim -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:19:20.514 [2024-10-09 07:59:22.378055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.514 [2024-10-09 07:59:22.378146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:19:20.514 [2024-10-09 07:59:22.378175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.378 ms 00:19:20.514 [2024-10-09 07:59:22.378194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.514 [2024-10-09 07:59:22.378249] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.573 ms, result 0 00:19:20.514 true 00:19:20.514 07:59:22 ftl.ftl_trim -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:19:20.772 [2024-10-09 07:59:22.650129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.772 [2024-10-09 07:59:22.650194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:19:20.772 [2024-10-09 07:59:22.650223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.077 ms 00:19:20.772 [2024-10-09 07:59:22.650238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.772 [2024-10-09 07:59:22.650302] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.261 ms, result 0 00:19:20.772 true 00:19:20.772 07:59:22 ftl.ftl_trim -- ftl/trim.sh@81 -- # killprocess 76358 00:19:20.772 07:59:22 ftl.ftl_trim -- common/autotest_common.sh@950 -- # '[' -z 76358 ']' 00:19:20.772 07:59:22 ftl.ftl_trim -- common/autotest_common.sh@954 -- # kill -0 76358 00:19:20.772 07:59:22 ftl.ftl_trim -- common/autotest_common.sh@955 -- # uname 00:19:20.772 07:59:22 ftl.ftl_trim -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:20.772 07:59:22 ftl.ftl_trim -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76358 00:19:20.772 killing process with pid 76358 00:19:20.772 07:59:22 ftl.ftl_trim -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:20.772 07:59:22 ftl.ftl_trim -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:20.772 07:59:22 ftl.ftl_trim -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76358' 00:19:20.772 07:59:22 ftl.ftl_trim -- common/autotest_common.sh@969 -- # kill 76358 00:19:20.772 07:59:22 ftl.ftl_trim -- common/autotest_common.sh@974 -- # wait 76358 00:19:21.709 [2024-10-09 07:59:23.649888] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:21.709 [2024-10-09 07:59:23.649969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:21.709 [2024-10-09 07:59:23.649991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:21.709 [2024-10-09 07:59:23.650005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.709 [2024-10-09 07:59:23.650038] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:19:21.709 [2024-10-09 07:59:23.653391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:21.709 [2024-10-09 07:59:23.653427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:21.709 [2024-10-09 07:59:23.653448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.326 ms 00:19:21.709 [2024-10-09 07:59:23.653461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.709 [2024-10-09 07:59:23.653766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:21.709 [2024-10-09 07:59:23.653795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:21.709 [2024-10-09 07:59:23.653813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.256 ms 00:19:21.709 [2024-10-09 07:59:23.653828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.709 [2024-10-09 07:59:23.657919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:21.709 [2024-10-09 07:59:23.657962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:21.709 [2024-10-09 07:59:23.657982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.061 ms 00:19:21.709 [2024-10-09 07:59:23.657995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.709 [2024-10-09 07:59:23.665536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:21.709 [2024-10-09 07:59:23.665592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:21.709 [2024-10-09 07:59:23.665615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.489 ms 00:19:21.709 [2024-10-09 07:59:23.665632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.709 [2024-10-09 07:59:23.678219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:21.709 [2024-10-09 07:59:23.678263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:21.709 [2024-10-09 07:59:23.678286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.520 ms 00:19:21.709 [2024-10-09 07:59:23.678299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.709 [2024-10-09 07:59:23.686791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:21.709 [2024-10-09 07:59:23.686840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:21.709 [2024-10-09 07:59:23.686860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.429 ms 00:19:21.709 [2024-10-09 07:59:23.686885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.709 [2024-10-09 07:59:23.687049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:21.709 [2024-10-09 07:59:23.687070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:21.709 [2024-10-09 07:59:23.687086] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.094 ms 00:19:21.709 [2024-10-09 07:59:23.687100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.709 [2024-10-09 07:59:23.700216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:21.709 [2024-10-09 07:59:23.700259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:21.709 [2024-10-09 07:59:23.700284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.079 ms 00:19:21.709 [2024-10-09 07:59:23.700298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.709 [2024-10-09 07:59:23.712808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:21.709 [2024-10-09 07:59:23.712851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:21.709 [2024-10-09 07:59:23.712882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.440 ms 00:19:21.709 [2024-10-09 07:59:23.712896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.991 [2024-10-09 07:59:23.725078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:21.991 [2024-10-09 07:59:23.725124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:21.991 [2024-10-09 07:59:23.725147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.122 ms 00:19:21.991 [2024-10-09 07:59:23.725161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.991 [2024-10-09 07:59:23.737364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:21.991 [2024-10-09 07:59:23.737407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:21.991 [2024-10-09 07:59:23.737431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.110 ms 00:19:21.991 [2024-10-09 07:59:23.737445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.991 [2024-10-09 07:59:23.737498] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:21.991 [2024-10-09 07:59:23.737530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:21.991 [2024-10-09 07:59:23.737551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:21.991 [2024-10-09 07:59:23.737566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:21.991 [2024-10-09 07:59:23.737584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:21.991 [2024-10-09 07:59:23.737598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:21.991 [2024-10-09 07:59:23.737621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:21.991 [2024-10-09 07:59:23.737635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:21.991 [2024-10-09 07:59:23.737653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:21.991 [2024-10-09 07:59:23.737667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:21.991 [2024-10-09 07:59:23.737685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:21.991 [2024-10-09 
07:59:23.737698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:21.991 [2024-10-09 07:59:23.737721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:21.991 [2024-10-09 07:59:23.737735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:21.991 [2024-10-09 07:59:23.737755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:21.991 [2024-10-09 07:59:23.737769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:21.991 [2024-10-09 07:59:23.737786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:21.991 [2024-10-09 07:59:23.737801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:21.991 [2024-10-09 07:59:23.737818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:21.991 [2024-10-09 07:59:23.737832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:21.991 [2024-10-09 07:59:23.737850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:21.991 [2024-10-09 07:59:23.737863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:21.991 [2024-10-09 07:59:23.737885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:21.991 [2024-10-09 07:59:23.737900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:21.991 [2024-10-09 07:59:23.737919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:21.991 [2024-10-09 07:59:23.737933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:21.991 [2024-10-09 07:59:23.737951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:21.991 [2024-10-09 07:59:23.737964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:21.991 [2024-10-09 07:59:23.737982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:21.991 [2024-10-09 07:59:23.737996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:21.991 [2024-10-09 07:59:23.738013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:21.991 [2024-10-09 07:59:23.738027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:21.991 [2024-10-09 07:59:23.738044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:21.991 [2024-10-09 07:59:23.738058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:21.991 [2024-10-09 07:59:23.738075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:21.991 [2024-10-09 07:59:23.738089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 
00:19:21.991 [2024-10-09 07:59:23.738106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:21.991 [2024-10-09 07:59:23.738120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:21.991 [2024-10-09 07:59:23.738142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:21.991 [2024-10-09 07:59:23.738157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:21.991 [2024-10-09 07:59:23.738177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:21.991 [2024-10-09 07:59:23.738191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:21.991 [2024-10-09 07:59:23.738209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:21.991 [2024-10-09 07:59:23.738223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:21.991 [2024-10-09 07:59:23.738240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:21.991 [2024-10-09 07:59:23.738254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:21.991 [2024-10-09 07:59:23.738272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:21.991 [2024-10-09 07:59:23.738285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:21.991 [2024-10-09 07:59:23.738303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:21.991 [2024-10-09 07:59:23.738316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:21.991 [2024-10-09 07:59:23.738349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:21.991 [2024-10-09 07:59:23.738366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:21.991 [2024-10-09 07:59:23.738384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:21.991 [2024-10-09 07:59:23.738398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:21.991 [2024-10-09 07:59:23.738420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:21.991 [2024-10-09 07:59:23.738433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:21.991 [2024-10-09 07:59:23.738451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:21.992 [2024-10-09 07:59:23.738465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:21.992 [2024-10-09 07:59:23.738483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:21.992 [2024-10-09 07:59:23.738497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:21.992 [2024-10-09 07:59:23.738514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 
wr_cnt: 0 state: free 00:19:21.992 [2024-10-09 07:59:23.738527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:21.992 [2024-10-09 07:59:23.738545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:21.992 [2024-10-09 07:59:23.738559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:21.992 [2024-10-09 07:59:23.738576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:21.992 [2024-10-09 07:59:23.738590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:21.992 [2024-10-09 07:59:23.738607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:21.992 [2024-10-09 07:59:23.738621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:21.992 [2024-10-09 07:59:23.738640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:21.992 [2024-10-09 07:59:23.738654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:21.992 [2024-10-09 07:59:23.738677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:21.992 [2024-10-09 07:59:23.738692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:21.992 [2024-10-09 07:59:23.738710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:21.992 [2024-10-09 07:59:23.738723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:21.992 [2024-10-09 07:59:23.738741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:21.992 [2024-10-09 07:59:23.738755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:21.992 [2024-10-09 07:59:23.738773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:21.992 [2024-10-09 07:59:23.738787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:21.992 [2024-10-09 07:59:23.738804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:21.992 [2024-10-09 07:59:23.738818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:21.992 [2024-10-09 07:59:23.738836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:21.992 [2024-10-09 07:59:23.738849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:21.992 [2024-10-09 07:59:23.738868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:21.992 [2024-10-09 07:59:23.738881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:21.992 [2024-10-09 07:59:23.738899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:21.992 [2024-10-09 07:59:23.738912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:21.992 [2024-10-09 07:59:23.738934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:21.992 [2024-10-09 07:59:23.738948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:21.992 [2024-10-09 07:59:23.738966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:21.992 [2024-10-09 07:59:23.738979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:21.992 [2024-10-09 07:59:23.738997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:21.992 [2024-10-09 07:59:23.739011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:21.992 [2024-10-09 07:59:23.739029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:21.992 [2024-10-09 07:59:23.739043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:21.992 [2024-10-09 07:59:23.739062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:21.992 [2024-10-09 07:59:23.739076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:21.992 [2024-10-09 07:59:23.739093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:21.992 [2024-10-09 07:59:23.739107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:21.992 [2024-10-09 07:59:23.739125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:21.992 [2024-10-09 07:59:23.739139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:21.992 [2024-10-09 07:59:23.739156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:21.992 [2024-10-09 07:59:23.739179] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:21.992 [2024-10-09 07:59:23.739202] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 2a716a08-2588-4711-9bfe-c66b02b59b71 00:19:21.992 [2024-10-09 07:59:23.739217] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:21.992 [2024-10-09 07:59:23.739234] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:21.992 [2024-10-09 07:59:23.739246] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:21.992 [2024-10-09 07:59:23.739263] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:21.992 [2024-10-09 07:59:23.739292] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:21.992 [2024-10-09 07:59:23.739312] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:21.992 [2024-10-09 07:59:23.739343] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:21.992 [2024-10-09 07:59:23.739363] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:21.992 [2024-10-09 07:59:23.739375] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:21.992 [2024-10-09 07:59:23.739392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:19:21.992 [2024-10-09 07:59:23.739405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:21.992 [2024-10-09 07:59:23.739424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.898 ms 00:19:21.992 [2024-10-09 07:59:23.739437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.992 [2024-10-09 07:59:23.756220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:21.992 [2024-10-09 07:59:23.756276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:21.992 [2024-10-09 07:59:23.756308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.725 ms 00:19:21.992 [2024-10-09 07:59:23.756322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.992 [2024-10-09 07:59:23.756885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:21.992 [2024-10-09 07:59:23.756932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:21.992 [2024-10-09 07:59:23.756956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.409 ms 00:19:21.992 [2024-10-09 07:59:23.756970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.992 [2024-10-09 07:59:23.809964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:21.992 [2024-10-09 07:59:23.810038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:21.992 [2024-10-09 07:59:23.810065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:21.992 [2024-10-09 07:59:23.810084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.992 [2024-10-09 07:59:23.810234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:21.992 [2024-10-09 07:59:23.810255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:21.992 [2024-10-09 07:59:23.810274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:21.992 [2024-10-09 07:59:23.810287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.992 [2024-10-09 07:59:23.810383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:21.992 [2024-10-09 07:59:23.810406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:21.992 [2024-10-09 07:59:23.810428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:21.992 [2024-10-09 07:59:23.810442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.992 [2024-10-09 07:59:23.810475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:21.992 [2024-10-09 07:59:23.810490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:21.992 [2024-10-09 07:59:23.810504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:21.992 [2024-10-09 07:59:23.810515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.992 [2024-10-09 07:59:23.914733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:21.992 [2024-10-09 07:59:23.915000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:21.992 [2024-10-09 07:59:23.915044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:21.992 [2024-10-09 07:59:23.915060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.251 [2024-10-09 
07:59:24.000354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:22.251 [2024-10-09 07:59:24.000410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:22.251 [2024-10-09 07:59:24.000437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:22.251 [2024-10-09 07:59:24.000451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.251 [2024-10-09 07:59:24.000569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:22.251 [2024-10-09 07:59:24.000589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:22.251 [2024-10-09 07:59:24.000613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:22.251 [2024-10-09 07:59:24.000627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.251 [2024-10-09 07:59:24.000670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:22.251 [2024-10-09 07:59:24.000692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:22.251 [2024-10-09 07:59:24.000710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:22.251 [2024-10-09 07:59:24.000723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.251 [2024-10-09 07:59:24.000859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:22.251 [2024-10-09 07:59:24.000880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:22.251 [2024-10-09 07:59:24.000900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:22.251 [2024-10-09 07:59:24.000913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.251 [2024-10-09 07:59:24.000977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:22.251 [2024-10-09 07:59:24.000997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:22.251 [2024-10-09 07:59:24.001024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:22.251 [2024-10-09 07:59:24.001037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.251 [2024-10-09 07:59:24.001094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:22.251 [2024-10-09 07:59:24.001111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:22.251 [2024-10-09 07:59:24.001134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:22.251 [2024-10-09 07:59:24.001148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.251 [2024-10-09 07:59:24.001213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:22.251 [2024-10-09 07:59:24.001236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:22.251 [2024-10-09 07:59:24.001254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:22.251 [2024-10-09 07:59:24.001267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.251 [2024-10-09 07:59:24.001482] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 351.566 ms, result 0 00:19:23.187 07:59:25 ftl.ftl_trim -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data 00:19:23.187 07:59:25 ftl.ftl_trim -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:23.187 [2024-10-09 07:59:25.180918] Starting SPDK v25.01-pre git sha1 1c2942c86 / DPDK 24.03.0 initialization... 00:19:23.187 [2024-10-09 07:59:25.181098] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76422 ] 00:19:23.446 [2024-10-09 07:59:25.348605] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:23.704 [2024-10-09 07:59:25.541690] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:19:23.963 [2024-10-09 07:59:25.862685] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:23.963 [2024-10-09 07:59:25.862766] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:24.224 [2024-10-09 07:59:26.025037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.224 [2024-10-09 07:59:26.025108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:24.224 [2024-10-09 07:59:26.025149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:24.224 [2024-10-09 07:59:26.025177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.224 [2024-10-09 07:59:26.028570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.224 [2024-10-09 07:59:26.028616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:24.224 [2024-10-09 07:59:26.028649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.346 ms 00:19:24.224 [2024-10-09 07:59:26.028660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.224 [2024-10-09 07:59:26.028804] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:24.224 [2024-10-09 07:59:26.029751] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:24.224 [2024-10-09 07:59:26.029796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.224 [2024-10-09 07:59:26.029811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:24.224 [2024-10-09 07:59:26.029828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.002 ms 00:19:24.224 [2024-10-09 07:59:26.029839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.224 [2024-10-09 07:59:26.031097] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:24.224 [2024-10-09 07:59:26.047793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.224 [2024-10-09 07:59:26.047840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:24.224 [2024-10-09 07:59:26.047859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.697 ms 00:19:24.224 [2024-10-09 07:59:26.047870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.224 [2024-10-09 07:59:26.047999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.224 [2024-10-09 07:59:26.048023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:24.224 [2024-10-09 07:59:26.048040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.028 ms 00:19:24.224 [2024-10-09 07:59:26.048052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.224 [2024-10-09 07:59:26.052833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.224 [2024-10-09 07:59:26.053031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:24.224 [2024-10-09 07:59:26.053062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.721 ms 00:19:24.224 [2024-10-09 07:59:26.053075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.224 [2024-10-09 07:59:26.053241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.224 [2024-10-09 07:59:26.053264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:24.224 [2024-10-09 07:59:26.053277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:19:24.224 [2024-10-09 07:59:26.053289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.224 [2024-10-09 07:59:26.053347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.224 [2024-10-09 07:59:26.053365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:24.224 [2024-10-09 07:59:26.053378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:19:24.224 [2024-10-09 07:59:26.053390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.224 [2024-10-09 07:59:26.053427] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:24.224 [2024-10-09 07:59:26.057699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.224 [2024-10-09 07:59:26.057739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:24.224 [2024-10-09 07:59:26.057755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.281 ms 00:19:24.224 [2024-10-09 07:59:26.057767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.224 [2024-10-09 07:59:26.057842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.224 [2024-10-09 07:59:26.057861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:24.224 [2024-10-09 07:59:26.057873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:19:24.224 [2024-10-09 07:59:26.057884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.224 [2024-10-09 07:59:26.057921] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:24.224 [2024-10-09 07:59:26.057951] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:24.224 [2024-10-09 07:59:26.057995] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:24.224 [2024-10-09 07:59:26.058019] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:19:24.224 [2024-10-09 07:59:26.058134] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:24.224 [2024-10-09 07:59:26.058150] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:24.224 [2024-10-09 07:59:26.058164] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: 
*NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:24.224 [2024-10-09 07:59:26.058178] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:24.224 [2024-10-09 07:59:26.058192] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:24.224 [2024-10-09 07:59:26.058204] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:24.224 [2024-10-09 07:59:26.058214] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:24.224 [2024-10-09 07:59:26.058225] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:24.224 [2024-10-09 07:59:26.058236] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:24.224 [2024-10-09 07:59:26.058247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.224 [2024-10-09 07:59:26.058263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:24.224 [2024-10-09 07:59:26.058275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.330 ms 00:19:24.224 [2024-10-09 07:59:26.058287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.224 [2024-10-09 07:59:26.058439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.224 [2024-10-09 07:59:26.058460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:24.224 [2024-10-09 07:59:26.058472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:19:24.224 [2024-10-09 07:59:26.058484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.224 [2024-10-09 07:59:26.058600] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:24.224 [2024-10-09 07:59:26.058616] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:24.224 [2024-10-09 07:59:26.058642] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:24.224 [2024-10-09 07:59:26.058653] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:24.224 [2024-10-09 07:59:26.058664] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:24.224 [2024-10-09 07:59:26.058674] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:24.224 [2024-10-09 07:59:26.058685] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:24.224 [2024-10-09 07:59:26.058695] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:24.224 [2024-10-09 07:59:26.058705] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:24.224 [2024-10-09 07:59:26.058715] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:24.224 [2024-10-09 07:59:26.058725] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:24.224 [2024-10-09 07:59:26.058749] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:24.224 [2024-10-09 07:59:26.058759] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:24.225 [2024-10-09 07:59:26.058770] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:24.225 [2024-10-09 07:59:26.058780] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:19:24.225 [2024-10-09 07:59:26.058790] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:24.225 [2024-10-09 07:59:26.058800] ftl_layout.c: 
130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:24.225 [2024-10-09 07:59:26.058810] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:19:24.225 [2024-10-09 07:59:26.058820] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:24.225 [2024-10-09 07:59:26.058830] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:24.225 [2024-10-09 07:59:26.058841] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:24.225 [2024-10-09 07:59:26.058851] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:24.225 [2024-10-09 07:59:26.058862] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:24.225 [2024-10-09 07:59:26.058872] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:24.225 [2024-10-09 07:59:26.058882] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:24.225 [2024-10-09 07:59:26.058892] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:24.225 [2024-10-09 07:59:26.058902] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:24.225 [2024-10-09 07:59:26.058912] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:24.225 [2024-10-09 07:59:26.058922] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:24.225 [2024-10-09 07:59:26.058932] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:19:24.225 [2024-10-09 07:59:26.058941] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:24.225 [2024-10-09 07:59:26.058951] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:24.225 [2024-10-09 07:59:26.058961] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:19:24.225 [2024-10-09 07:59:26.058971] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:24.225 [2024-10-09 07:59:26.058981] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:24.225 [2024-10-09 07:59:26.058991] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:19:24.225 [2024-10-09 07:59:26.059001] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:24.225 [2024-10-09 07:59:26.059012] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:24.225 [2024-10-09 07:59:26.059022] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:19:24.225 [2024-10-09 07:59:26.059032] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:24.225 [2024-10-09 07:59:26.059042] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:24.225 [2024-10-09 07:59:26.059052] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:19:24.225 [2024-10-09 07:59:26.059062] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:24.225 [2024-10-09 07:59:26.059072] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:24.225 [2024-10-09 07:59:26.059083] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:24.225 [2024-10-09 07:59:26.059094] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:24.225 [2024-10-09 07:59:26.059104] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:24.225 [2024-10-09 07:59:26.059116] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:24.225 
[2024-10-09 07:59:26.059125] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:24.225 [2024-10-09 07:59:26.059135] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:24.225 [2024-10-09 07:59:26.059146] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:24.225 [2024-10-09 07:59:26.059155] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:24.225 [2024-10-09 07:59:26.059165] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:24.225 [2024-10-09 07:59:26.059177] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:24.225 [2024-10-09 07:59:26.059206] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:24.225 [2024-10-09 07:59:26.059219] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:24.225 [2024-10-09 07:59:26.059231] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:19:24.225 [2024-10-09 07:59:26.059243] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:19:24.225 [2024-10-09 07:59:26.059254] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:19:24.225 [2024-10-09 07:59:26.059265] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:19:24.225 [2024-10-09 07:59:26.059276] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:19:24.225 [2024-10-09 07:59:26.059287] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:19:24.225 [2024-10-09 07:59:26.059299] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:19:24.225 [2024-10-09 07:59:26.059309] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:19:24.225 [2024-10-09 07:59:26.059321] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:19:24.225 [2024-10-09 07:59:26.059347] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:19:24.225 [2024-10-09 07:59:26.059361] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:19:24.225 [2024-10-09 07:59:26.059372] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:19:24.225 [2024-10-09 07:59:26.059383] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:19:24.225 [2024-10-09 07:59:26.059394] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:24.225 [2024-10-09 07:59:26.059407] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:24.225 [2024-10-09 07:59:26.059424] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:24.225 [2024-10-09 07:59:26.059435] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:24.225 [2024-10-09 07:59:26.059446] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:24.225 [2024-10-09 07:59:26.059457] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:24.225 [2024-10-09 07:59:26.059469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.225 [2024-10-09 07:59:26.059481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:24.225 [2024-10-09 07:59:26.059492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.941 ms 00:19:24.225 [2024-10-09 07:59:26.059503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.225 [2024-10-09 07:59:26.100491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.225 [2024-10-09 07:59:26.100556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:24.225 [2024-10-09 07:59:26.100578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.913 ms 00:19:24.225 [2024-10-09 07:59:26.100590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.225 [2024-10-09 07:59:26.100798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.225 [2024-10-09 07:59:26.100821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:24.225 [2024-10-09 07:59:26.100835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:19:24.225 [2024-10-09 07:59:26.100847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.225 [2024-10-09 07:59:26.142066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.225 [2024-10-09 07:59:26.142411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:24.225 [2024-10-09 07:59:26.142451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.181 ms 00:19:24.225 [2024-10-09 07:59:26.142466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.225 [2024-10-09 07:59:26.142646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.225 [2024-10-09 07:59:26.142667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:24.225 [2024-10-09 07:59:26.142680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:24.225 [2024-10-09 07:59:26.142697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.225 [2024-10-09 07:59:26.143072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.225 [2024-10-09 07:59:26.143091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:24.225 [2024-10-09 07:59:26.143104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.343 ms 00:19:24.225 [2024-10-09 07:59:26.143115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.225 [2024-10-09 
07:59:26.143277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.225 [2024-10-09 07:59:26.143297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:24.225 [2024-10-09 07:59:26.143310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.129 ms 00:19:24.225 [2024-10-09 07:59:26.143321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.225 [2024-10-09 07:59:26.160155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.225 [2024-10-09 07:59:26.160228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:24.225 [2024-10-09 07:59:26.160249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.777 ms 00:19:24.225 [2024-10-09 07:59:26.160267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.225 [2024-10-09 07:59:26.177499] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:19:24.225 [2024-10-09 07:59:26.177817] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:24.225 [2024-10-09 07:59:26.177848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.225 [2024-10-09 07:59:26.177861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:24.226 [2024-10-09 07:59:26.177876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.336 ms 00:19:24.226 [2024-10-09 07:59:26.177889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.226 [2024-10-09 07:59:26.208398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.226 [2024-10-09 07:59:26.208480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:24.226 [2024-10-09 07:59:26.208530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.331 ms 00:19:24.226 [2024-10-09 07:59:26.208542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.226 [2024-10-09 07:59:26.225538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.226 [2024-10-09 07:59:26.225628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:24.226 [2024-10-09 07:59:26.225648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.834 ms 00:19:24.226 [2024-10-09 07:59:26.225662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.484 [2024-10-09 07:59:26.241682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.484 [2024-10-09 07:59:26.241727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:24.484 [2024-10-09 07:59:26.241760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.851 ms 00:19:24.484 [2024-10-09 07:59:26.241771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.484 [2024-10-09 07:59:26.242610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.484 [2024-10-09 07:59:26.242785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:24.484 [2024-10-09 07:59:26.242813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.709 ms 00:19:24.484 [2024-10-09 07:59:26.242825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.484 [2024-10-09 07:59:26.317470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:19:24.484 [2024-10-09 07:59:26.317543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:24.484 [2024-10-09 07:59:26.317565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 74.565 ms 00:19:24.484 [2024-10-09 07:59:26.317583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.484 [2024-10-09 07:59:26.330910] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:24.484 [2024-10-09 07:59:26.345301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.484 [2024-10-09 07:59:26.345624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:24.484 [2024-10-09 07:59:26.345662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.545 ms 00:19:24.484 [2024-10-09 07:59:26.345675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.484 [2024-10-09 07:59:26.345852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.484 [2024-10-09 07:59:26.345873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:24.484 [2024-10-09 07:59:26.345887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:19:24.484 [2024-10-09 07:59:26.345899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.484 [2024-10-09 07:59:26.345978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.484 [2024-10-09 07:59:26.345995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:24.484 [2024-10-09 07:59:26.346008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:19:24.484 [2024-10-09 07:59:26.346019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.484 [2024-10-09 07:59:26.346053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.485 [2024-10-09 07:59:26.346068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:24.485 [2024-10-09 07:59:26.346080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:19:24.485 [2024-10-09 07:59:26.346091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.485 [2024-10-09 07:59:26.346132] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:24.485 [2024-10-09 07:59:26.346149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.485 [2024-10-09 07:59:26.346164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:24.485 [2024-10-09 07:59:26.346175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:19:24.485 [2024-10-09 07:59:26.346187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.485 [2024-10-09 07:59:26.378109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.485 [2024-10-09 07:59:26.378184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:24.485 [2024-10-09 07:59:26.378205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.887 ms 00:19:24.485 [2024-10-09 07:59:26.378217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.485 [2024-10-09 07:59:26.378454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.485 [2024-10-09 07:59:26.378478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize 
initialization 00:19:24.485 [2024-10-09 07:59:26.378491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:19:24.485 [2024-10-09 07:59:26.378503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.485 [2024-10-09 07:59:26.379667] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:24.485 [2024-10-09 07:59:26.383932] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 354.218 ms, result 0 00:19:24.485 [2024-10-09 07:59:26.384736] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:24.485 [2024-10-09 07:59:26.401792] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:25.469  [2024-10-09T07:59:28.415Z] Copying: 27/256 [MB] (27 MBps) [2024-10-09T07:59:29.791Z] Copying: 50/256 [MB] (23 MBps) [2024-10-09T07:59:30.725Z] Copying: 73/256 [MB] (23 MBps) [2024-10-09T07:59:31.660Z] Copying: 96/256 [MB] (22 MBps) [2024-10-09T07:59:32.594Z] Copying: 119/256 [MB] (23 MBps) [2024-10-09T07:59:33.530Z] Copying: 141/256 [MB] (22 MBps) [2024-10-09T07:59:34.469Z] Copying: 166/256 [MB] (24 MBps) [2024-10-09T07:59:35.845Z] Copying: 190/256 [MB] (24 MBps) [2024-10-09T07:59:36.412Z] Copying: 214/256 [MB] (23 MBps) [2024-10-09T07:59:37.348Z] Copying: 237/256 [MB] (23 MBps) [2024-10-09T07:59:37.348Z] Copying: 256/256 [MB] (average 23 MBps)[2024-10-09 07:59:37.199744] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:35.336 [2024-10-09 07:59:37.212224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.336 [2024-10-09 07:59:37.212434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:35.336 [2024-10-09 07:59:37.212466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:35.336 [2024-10-09 07:59:37.212481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.336 [2024-10-09 07:59:37.212522] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:19:35.336 [2024-10-09 07:59:37.215931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.336 [2024-10-09 07:59:37.216117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:35.336 [2024-10-09 07:59:37.216145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.386 ms 00:19:35.336 [2024-10-09 07:59:37.216159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.336 [2024-10-09 07:59:37.216481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.336 [2024-10-09 07:59:37.216512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:35.336 [2024-10-09 07:59:37.216525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.286 ms 00:19:35.336 [2024-10-09 07:59:37.216536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.336 [2024-10-09 07:59:37.220400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.336 [2024-10-09 07:59:37.220434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:35.336 [2024-10-09 07:59:37.220449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.841 ms 00:19:35.336 [2024-10-09 07:59:37.220460] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.336 [2024-10-09 07:59:37.228136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.336 [2024-10-09 07:59:37.228171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:35.336 [2024-10-09 07:59:37.228199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.650 ms 00:19:35.336 [2024-10-09 07:59:37.228210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.336 [2024-10-09 07:59:37.260224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.336 [2024-10-09 07:59:37.260269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:35.336 [2024-10-09 07:59:37.260303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.928 ms 00:19:35.336 [2024-10-09 07:59:37.260315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.336 [2024-10-09 07:59:37.278456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.336 [2024-10-09 07:59:37.278505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:35.336 [2024-10-09 07:59:37.278523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.036 ms 00:19:35.336 [2024-10-09 07:59:37.278535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.336 [2024-10-09 07:59:37.278714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.336 [2024-10-09 07:59:37.278735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:35.336 [2024-10-09 07:59:37.278749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.108 ms 00:19:35.336 [2024-10-09 07:59:37.278777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.336 [2024-10-09 07:59:37.312148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.336 [2024-10-09 07:59:37.312347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:35.336 [2024-10-09 07:59:37.312384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.345 ms 00:19:35.336 [2024-10-09 07:59:37.312398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.336 [2024-10-09 07:59:37.345024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.336 [2024-10-09 07:59:37.345076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:35.336 [2024-10-09 07:59:37.345096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.570 ms 00:19:35.336 [2024-10-09 07:59:37.345108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.596 [2024-10-09 07:59:37.377113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.596 [2024-10-09 07:59:37.377158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:35.596 [2024-10-09 07:59:37.377193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.949 ms 00:19:35.596 [2024-10-09 07:59:37.377204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.596 [2024-10-09 07:59:37.409368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.596 [2024-10-09 07:59:37.409545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:35.596 [2024-10-09 07:59:37.409574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 32.062 ms 00:19:35.596 [2024-10-09 07:59:37.409587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.596 [2024-10-09 07:59:37.409649] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:35.596 [2024-10-09 07:59:37.409676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:35.596 [2024-10-09 07:59:37.409690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:35.596 [2024-10-09 07:59:37.409702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:35.596 [2024-10-09 07:59:37.409714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:35.596 [2024-10-09 07:59:37.409726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:35.596 [2024-10-09 07:59:37.409738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:35.596 [2024-10-09 07:59:37.409750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:35.596 [2024-10-09 07:59:37.409762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:35.596 [2024-10-09 07:59:37.409773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:35.596 [2024-10-09 07:59:37.409785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:35.596 [2024-10-09 07:59:37.409796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:35.596 [2024-10-09 07:59:37.409808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:35.596 [2024-10-09 07:59:37.409819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:35.596 [2024-10-09 07:59:37.409831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:35.596 [2024-10-09 07:59:37.409842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:35.596 [2024-10-09 07:59:37.409853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:35.596 [2024-10-09 07:59:37.409865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:35.596 [2024-10-09 07:59:37.409876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:35.596 [2024-10-09 07:59:37.409888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:35.596 [2024-10-09 07:59:37.409899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:35.596 [2024-10-09 07:59:37.409911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:35.596 [2024-10-09 07:59:37.409923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:35.596 [2024-10-09 07:59:37.409934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:35.596 
[2024-10-09 07:59:37.409945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:35.596 [2024-10-09 07:59:37.409957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:35.596 [2024-10-09 07:59:37.409968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:35.596 [2024-10-09 07:59:37.409980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:35.596 [2024-10-09 07:59:37.409993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:35.596 [2024-10-09 07:59:37.410005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:35.596 [2024-10-09 07:59:37.410016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:35.596 [2024-10-09 07:59:37.410027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:35.596 [2024-10-09 07:59:37.410039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:35.596 [2024-10-09 07:59:37.410051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:35.596 [2024-10-09 07:59:37.410062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:35.596 [2024-10-09 07:59:37.410075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:35.596 [2024-10-09 07:59:37.410087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:35.596 [2024-10-09 07:59:37.410098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:35.596 [2024-10-09 07:59:37.410110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:35.596 [2024-10-09 07:59:37.410121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:35.596 [2024-10-09 07:59:37.410132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:35.596 [2024-10-09 07:59:37.410144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:35.596 [2024-10-09 07:59:37.410155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:35.596 [2024-10-09 07:59:37.410167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:35.596 [2024-10-09 07:59:37.410178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:35.596 [2024-10-09 07:59:37.410189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:35.596 [2024-10-09 07:59:37.410200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:35.596 [2024-10-09 07:59:37.410212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:35.596 [2024-10-09 07:59:37.410223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 
state: free 00:19:35.596 [2024-10-09 07:59:37.410234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:35.596 [2024-10-09 07:59:37.410246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:35.596 [2024-10-09 07:59:37.410257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:35.596 [2024-10-09 07:59:37.410269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:35.596 [2024-10-09 07:59:37.410280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:35.596 [2024-10-09 07:59:37.410292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:35.597 [2024-10-09 07:59:37.410303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:35.597 [2024-10-09 07:59:37.410314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:35.597 [2024-10-09 07:59:37.410326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:35.597 [2024-10-09 07:59:37.410363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:35.597 [2024-10-09 07:59:37.410377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:35.597 [2024-10-09 07:59:37.410388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:35.597 [2024-10-09 07:59:37.410399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:35.597 [2024-10-09 07:59:37.410411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:35.597 [2024-10-09 07:59:37.410422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:35.597 [2024-10-09 07:59:37.410433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:35.597 [2024-10-09 07:59:37.410445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:35.597 [2024-10-09 07:59:37.410457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:35.597 [2024-10-09 07:59:37.410470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:35.597 [2024-10-09 07:59:37.410482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:35.597 [2024-10-09 07:59:37.410493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:35.597 [2024-10-09 07:59:37.410505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:35.597 [2024-10-09 07:59:37.410517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:35.597 [2024-10-09 07:59:37.410528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:35.597 [2024-10-09 07:59:37.410548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 
0 / 261120 wr_cnt: 0 state: free 00:19:35.597 [2024-10-09 07:59:37.410560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:35.597 [2024-10-09 07:59:37.410572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:35.597 [2024-10-09 07:59:37.410583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:35.597 [2024-10-09 07:59:37.410595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:35.597 [2024-10-09 07:59:37.410606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:35.597 [2024-10-09 07:59:37.410617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:35.597 [2024-10-09 07:59:37.410629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:35.597 [2024-10-09 07:59:37.410641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:35.597 [2024-10-09 07:59:37.410652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:35.597 [2024-10-09 07:59:37.410663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:35.597 [2024-10-09 07:59:37.410674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:35.597 [2024-10-09 07:59:37.410686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:35.597 [2024-10-09 07:59:37.410697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:35.597 [2024-10-09 07:59:37.410708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:35.597 [2024-10-09 07:59:37.410719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:35.597 [2024-10-09 07:59:37.410731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:35.597 [2024-10-09 07:59:37.410742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:35.597 [2024-10-09 07:59:37.410753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:35.597 [2024-10-09 07:59:37.410764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:35.597 [2024-10-09 07:59:37.410775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:35.597 [2024-10-09 07:59:37.410787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:35.597 [2024-10-09 07:59:37.410798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:35.597 [2024-10-09 07:59:37.410809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:35.597 [2024-10-09 07:59:37.410820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:35.597 [2024-10-09 07:59:37.410832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:35.597 [2024-10-09 07:59:37.410844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:35.597 [2024-10-09 07:59:37.410881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:35.597 [2024-10-09 07:59:37.410902] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:35.597 [2024-10-09 07:59:37.410913] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 2a716a08-2588-4711-9bfe-c66b02b59b71 00:19:35.597 [2024-10-09 07:59:37.410925] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:35.597 [2024-10-09 07:59:37.410935] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:35.597 [2024-10-09 07:59:37.410957] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:35.597 [2024-10-09 07:59:37.410969] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:35.597 [2024-10-09 07:59:37.410979] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:35.597 [2024-10-09 07:59:37.410990] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:35.597 [2024-10-09 07:59:37.411000] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:35.597 [2024-10-09 07:59:37.411010] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:35.597 [2024-10-09 07:59:37.411020] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:35.597 [2024-10-09 07:59:37.411032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.597 [2024-10-09 07:59:37.411043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:35.597 [2024-10-09 07:59:37.411055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.386 ms 00:19:35.597 [2024-10-09 07:59:37.411066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.597 [2024-10-09 07:59:37.427800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.597 [2024-10-09 07:59:37.427854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:35.597 [2024-10-09 07:59:37.427872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.704 ms 00:19:35.597 [2024-10-09 07:59:37.427884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.597 [2024-10-09 07:59:37.428375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.597 [2024-10-09 07:59:37.428400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:35.597 [2024-10-09 07:59:37.428414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.430 ms 00:19:35.597 [2024-10-09 07:59:37.428425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.597 [2024-10-09 07:59:37.469298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:35.597 [2024-10-09 07:59:37.469376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:35.597 [2024-10-09 07:59:37.469395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:35.597 [2024-10-09 07:59:37.469407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.597 [2024-10-09 07:59:37.469567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:35.597 [2024-10-09 
07:59:37.469587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:35.597 [2024-10-09 07:59:37.469599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:35.597 [2024-10-09 07:59:37.469610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.597 [2024-10-09 07:59:37.469677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:35.597 [2024-10-09 07:59:37.469710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:35.597 [2024-10-09 07:59:37.469722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:35.597 [2024-10-09 07:59:37.469733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.597 [2024-10-09 07:59:37.469759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:35.597 [2024-10-09 07:59:37.469773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:35.597 [2024-10-09 07:59:37.469784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:35.597 [2024-10-09 07:59:37.469795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.597 [2024-10-09 07:59:37.574183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:35.597 [2024-10-09 07:59:37.574256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:35.597 [2024-10-09 07:59:37.574276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:35.597 [2024-10-09 07:59:37.574287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.856 [2024-10-09 07:59:37.661760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:35.856 [2024-10-09 07:59:37.662015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:35.856 [2024-10-09 07:59:37.662048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:35.856 [2024-10-09 07:59:37.662061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.856 [2024-10-09 07:59:37.662149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:35.856 [2024-10-09 07:59:37.662167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:35.856 [2024-10-09 07:59:37.662206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:35.856 [2024-10-09 07:59:37.662217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.856 [2024-10-09 07:59:37.662254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:35.856 [2024-10-09 07:59:37.662267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:35.856 [2024-10-09 07:59:37.662279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:35.856 [2024-10-09 07:59:37.662290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.856 [2024-10-09 07:59:37.662449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:35.856 [2024-10-09 07:59:37.662473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:35.856 [2024-10-09 07:59:37.662486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:35.856 [2024-10-09 07:59:37.662511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.856 [2024-10-09 07:59:37.662571] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:35.856 [2024-10-09 07:59:37.662591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:35.856 [2024-10-09 07:59:37.662603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:35.856 [2024-10-09 07:59:37.662614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.856 [2024-10-09 07:59:37.662663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:35.856 [2024-10-09 07:59:37.662678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:35.856 [2024-10-09 07:59:37.662689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:35.856 [2024-10-09 07:59:37.662714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.856 [2024-10-09 07:59:37.662770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:35.856 [2024-10-09 07:59:37.662788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:35.856 [2024-10-09 07:59:37.662800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:35.856 [2024-10-09 07:59:37.662811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.856 [2024-10-09 07:59:37.663002] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 450.754 ms, result 0 00:19:36.791 00:19:36.791 00:19:36.791 07:59:38 ftl.ftl_trim -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero 00:19:36.791 07:59:38 ftl.ftl_trim -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data 00:19:37.727 07:59:39 ftl.ftl_trim -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:37.727 [2024-10-09 07:59:39.467452] Starting SPDK v25.01-pre git sha1 1c2942c86 / DPDK 24.03.0 initialization... 
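The three trim.sh steps logged just above (@86, @87, @90) are the data-verification core of this test: compare the dumped region against /dev/zero, checksum it, then push a random pattern back into ftl0 through spdk_dd. A minimal sketch of that write/verify cycle, assuming an ftl0 bdev already described by ftl.json; the readback direction (--ib/--of) and the /tmp/readback path are illustrative assumptions, not taken from this log:

  SPDK=/home/vagrant/spdk_repo/spdk
  CFG=$SPDK/test/ftl/config/ftl.json
  # Write 1024 blocks of the pre-generated random pattern into the FTL bdev
  # (this is the exact invocation logged at trim.sh@90 above).
  $SPDK/build/bin/spdk_dd --if=$SPDK/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=$CFG
  # Read the region back out of the bdev (assumed reverse form of --if/--ob).
  $SPDK/build/bin/spdk_dd --ib=ftl0 --of=/tmp/readback --count=1024 --json=$CFG
  # A trimmed region should compare equal to zeroes (trim.sh@86), and md5sum
  # fingerprints the data for a later match (trim.sh@87).
  cmp --bytes=4194304 /tmp/readback /dev/zero
  md5sum /tmp/readback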
00:19:37.727 [2024-10-09 07:59:39.467615] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76571 ] 00:19:37.727 [2024-10-09 07:59:39.629210] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:37.985 [2024-10-09 07:59:39.838892] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:19:38.244 [2024-10-09 07:59:40.159674] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:38.244 [2024-10-09 07:59:40.159758] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:38.503 [2024-10-09 07:59:40.321891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.503 [2024-10-09 07:59:40.321958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:38.503 [2024-10-09 07:59:40.321983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:38.503 [2024-10-09 07:59:40.321995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.503 [2024-10-09 07:59:40.325326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.503 [2024-10-09 07:59:40.325388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:38.503 [2024-10-09 07:59:40.325406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.300 ms 00:19:38.503 [2024-10-09 07:59:40.325418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.503 [2024-10-09 07:59:40.325553] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:38.503 [2024-10-09 07:59:40.326501] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:38.503 [2024-10-09 07:59:40.326545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.503 [2024-10-09 07:59:40.326561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:38.503 [2024-10-09 07:59:40.326580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.003 ms 00:19:38.503 [2024-10-09 07:59:40.326591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.503 [2024-10-09 07:59:40.327856] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:38.503 [2024-10-09 07:59:40.346087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.503 [2024-10-09 07:59:40.346140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:38.503 [2024-10-09 07:59:40.346160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.230 ms 00:19:38.503 [2024-10-09 07:59:40.346172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.503 [2024-10-09 07:59:40.346319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.503 [2024-10-09 07:59:40.346367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:38.503 [2024-10-09 07:59:40.346390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:19:38.503 [2024-10-09 07:59:40.346402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.503 [2024-10-09 07:59:40.350852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:19:38.503 [2024-10-09 07:59:40.350901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:38.503 [2024-10-09 07:59:40.350920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.385 ms 00:19:38.503 [2024-10-09 07:59:40.350931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.503 [2024-10-09 07:59:40.351079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.503 [2024-10-09 07:59:40.351106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:38.503 [2024-10-09 07:59:40.351119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:19:38.503 [2024-10-09 07:59:40.351130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.503 [2024-10-09 07:59:40.351171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.503 [2024-10-09 07:59:40.351187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:38.503 [2024-10-09 07:59:40.351199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:19:38.503 [2024-10-09 07:59:40.351210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.503 [2024-10-09 07:59:40.351241] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:38.503 [2024-10-09 07:59:40.355641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.503 [2024-10-09 07:59:40.355684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:38.503 [2024-10-09 07:59:40.355700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.404 ms 00:19:38.503 [2024-10-09 07:59:40.355711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.503 [2024-10-09 07:59:40.355787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.503 [2024-10-09 07:59:40.355812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:38.503 [2024-10-09 07:59:40.355825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:19:38.503 [2024-10-09 07:59:40.355836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.503 [2024-10-09 07:59:40.355868] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:38.503 [2024-10-09 07:59:40.355899] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:38.503 [2024-10-09 07:59:40.355945] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:38.503 [2024-10-09 07:59:40.355965] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:19:38.503 [2024-10-09 07:59:40.356084] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:38.503 [2024-10-09 07:59:40.356100] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:38.503 [2024-10-09 07:59:40.356115] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:38.503 [2024-10-09 07:59:40.356130] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:38.503 [2024-10-09 07:59:40.356143] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:38.503 [2024-10-09 07:59:40.356155] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:38.503 [2024-10-09 07:59:40.356166] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:38.503 [2024-10-09 07:59:40.356177] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:38.503 [2024-10-09 07:59:40.356188] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:38.503 [2024-10-09 07:59:40.356200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.503 [2024-10-09 07:59:40.356211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:38.503 [2024-10-09 07:59:40.356228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.336 ms 00:19:38.503 [2024-10-09 07:59:40.356239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.503 [2024-10-09 07:59:40.356393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.503 [2024-10-09 07:59:40.356415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:38.503 [2024-10-09 07:59:40.356428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.094 ms 00:19:38.503 [2024-10-09 07:59:40.356439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.503 [2024-10-09 07:59:40.356557] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:38.503 [2024-10-09 07:59:40.356575] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:38.503 [2024-10-09 07:59:40.356587] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:38.503 [2024-10-09 07:59:40.356606] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:38.503 [2024-10-09 07:59:40.356617] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:38.503 [2024-10-09 07:59:40.356627] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:38.503 [2024-10-09 07:59:40.356638] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:38.503 [2024-10-09 07:59:40.356649] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:38.503 [2024-10-09 07:59:40.356660] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:38.503 [2024-10-09 07:59:40.356670] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:38.503 [2024-10-09 07:59:40.356681] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:38.503 [2024-10-09 07:59:40.356705] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:38.503 [2024-10-09 07:59:40.356716] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:38.503 [2024-10-09 07:59:40.356726] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:38.503 [2024-10-09 07:59:40.356737] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:19:38.503 [2024-10-09 07:59:40.356746] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:38.503 [2024-10-09 07:59:40.356757] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:38.503 [2024-10-09 07:59:40.356767] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:19:38.503 [2024-10-09 07:59:40.356777] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:38.503 [2024-10-09 07:59:40.356787] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:38.503 [2024-10-09 07:59:40.356797] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:38.503 [2024-10-09 07:59:40.356807] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:38.503 [2024-10-09 07:59:40.356817] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:38.503 [2024-10-09 07:59:40.356827] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:38.503 [2024-10-09 07:59:40.356837] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:38.503 [2024-10-09 07:59:40.356847] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:38.503 [2024-10-09 07:59:40.356857] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:38.503 [2024-10-09 07:59:40.356867] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:38.503 [2024-10-09 07:59:40.356877] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:38.503 [2024-10-09 07:59:40.356887] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:19:38.503 [2024-10-09 07:59:40.356897] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:38.503 [2024-10-09 07:59:40.356907] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:38.503 [2024-10-09 07:59:40.356917] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:19:38.503 [2024-10-09 07:59:40.356927] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:38.503 [2024-10-09 07:59:40.356938] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:38.503 [2024-10-09 07:59:40.356948] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:19:38.503 [2024-10-09 07:59:40.356958] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:38.503 [2024-10-09 07:59:40.356968] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:38.503 [2024-10-09 07:59:40.356978] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:19:38.503 [2024-10-09 07:59:40.356988] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:38.503 [2024-10-09 07:59:40.356998] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:38.503 [2024-10-09 07:59:40.357008] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:19:38.503 [2024-10-09 07:59:40.357021] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:38.503 [2024-10-09 07:59:40.357032] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:38.503 [2024-10-09 07:59:40.357043] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:38.503 [2024-10-09 07:59:40.357054] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:38.503 [2024-10-09 07:59:40.357065] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:38.503 [2024-10-09 07:59:40.357076] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:38.503 [2024-10-09 07:59:40.357087] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:38.503 [2024-10-09 07:59:40.357097] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:38.503 
[2024-10-09 07:59:40.357107] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:38.503 [2024-10-09 07:59:40.357117] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:38.503 [2024-10-09 07:59:40.357128] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:38.503 [2024-10-09 07:59:40.357140] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:38.503 [2024-10-09 07:59:40.357171] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:38.503 [2024-10-09 07:59:40.357189] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:38.503 [2024-10-09 07:59:40.357204] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:19:38.504 [2024-10-09 07:59:40.357215] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:19:38.504 [2024-10-09 07:59:40.357227] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:19:38.504 [2024-10-09 07:59:40.357237] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:19:38.504 [2024-10-09 07:59:40.357253] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:19:38.504 [2024-10-09 07:59:40.357263] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:19:38.504 [2024-10-09 07:59:40.357274] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:19:38.504 [2024-10-09 07:59:40.357285] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:19:38.504 [2024-10-09 07:59:40.357296] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:19:38.504 [2024-10-09 07:59:40.357307] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:19:38.504 [2024-10-09 07:59:40.357318] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:19:38.504 [2024-10-09 07:59:40.357348] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:19:38.504 [2024-10-09 07:59:40.357369] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:19:38.504 [2024-10-09 07:59:40.357381] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:38.504 [2024-10-09 07:59:40.357394] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:38.504 [2024-10-09 07:59:40.357405] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:19:38.504 [2024-10-09 07:59:40.357416] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:38.504 [2024-10-09 07:59:40.357427] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:38.504 [2024-10-09 07:59:40.357440] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:38.504 [2024-10-09 07:59:40.357452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.504 [2024-10-09 07:59:40.357470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:38.504 [2024-10-09 07:59:40.357482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.967 ms 00:19:38.504 [2024-10-09 07:59:40.357493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.504 [2024-10-09 07:59:40.400304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.504 [2024-10-09 07:59:40.400383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:38.504 [2024-10-09 07:59:40.400405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.734 ms 00:19:38.504 [2024-10-09 07:59:40.400418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.504 [2024-10-09 07:59:40.400630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.504 [2024-10-09 07:59:40.400652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:38.504 [2024-10-09 07:59:40.400665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:19:38.504 [2024-10-09 07:59:40.400677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.504 [2024-10-09 07:59:40.442053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.504 [2024-10-09 07:59:40.442132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:38.504 [2024-10-09 07:59:40.442151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.341 ms 00:19:38.504 [2024-10-09 07:59:40.442163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.504 [2024-10-09 07:59:40.442326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.504 [2024-10-09 07:59:40.442361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:38.504 [2024-10-09 07:59:40.442378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:38.504 [2024-10-09 07:59:40.442389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.504 [2024-10-09 07:59:40.442734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.504 [2024-10-09 07:59:40.442753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:38.504 [2024-10-09 07:59:40.442767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.306 ms 00:19:38.504 [2024-10-09 07:59:40.442778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.504 [2024-10-09 07:59:40.442940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.504 [2024-10-09 07:59:40.442959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:38.504 [2024-10-09 07:59:40.442971] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.129 ms 00:19:38.504 [2024-10-09 07:59:40.442981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.504 [2024-10-09 07:59:40.459697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.504 [2024-10-09 07:59:40.459740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:38.504 [2024-10-09 07:59:40.459758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.685 ms 00:19:38.504 [2024-10-09 07:59:40.459770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.504 [2024-10-09 07:59:40.476409] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:19:38.504 [2024-10-09 07:59:40.476454] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:38.504 [2024-10-09 07:59:40.476473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.504 [2024-10-09 07:59:40.476485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:38.504 [2024-10-09 07:59:40.476498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.533 ms 00:19:38.504 [2024-10-09 07:59:40.476509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.504 [2024-10-09 07:59:40.506791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.504 [2024-10-09 07:59:40.506833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:38.504 [2024-10-09 07:59:40.506859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.178 ms 00:19:38.504 [2024-10-09 07:59:40.506871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.763 [2024-10-09 07:59:40.523117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.763 [2024-10-09 07:59:40.523173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:38.763 [2024-10-09 07:59:40.523190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.142 ms 00:19:38.763 [2024-10-09 07:59:40.523202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.763 [2024-10-09 07:59:40.539192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.763 [2024-10-09 07:59:40.539231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:38.763 [2024-10-09 07:59:40.539246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.890 ms 00:19:38.763 [2024-10-09 07:59:40.539258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.763 [2024-10-09 07:59:40.540139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.763 [2024-10-09 07:59:40.540176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:38.763 [2024-10-09 07:59:40.540191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.726 ms 00:19:38.763 [2024-10-09 07:59:40.540203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.763 [2024-10-09 07:59:40.613984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.763 [2024-10-09 07:59:40.614055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:38.763 [2024-10-09 07:59:40.614074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 73.742 ms 00:19:38.763 [2024-10-09 07:59:40.614086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.763 [2024-10-09 07:59:40.627062] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:38.763 [2024-10-09 07:59:40.642074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.763 [2024-10-09 07:59:40.642156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:38.763 [2024-10-09 07:59:40.642178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.815 ms 00:19:38.763 [2024-10-09 07:59:40.642190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.763 [2024-10-09 07:59:40.642358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.763 [2024-10-09 07:59:40.642382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:38.763 [2024-10-09 07:59:40.642396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:19:38.763 [2024-10-09 07:59:40.642408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.763 [2024-10-09 07:59:40.642487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.763 [2024-10-09 07:59:40.642503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:38.763 [2024-10-09 07:59:40.642515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:19:38.763 [2024-10-09 07:59:40.642526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.763 [2024-10-09 07:59:40.642560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.763 [2024-10-09 07:59:40.642575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:38.764 [2024-10-09 07:59:40.642587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:19:38.764 [2024-10-09 07:59:40.642598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.764 [2024-10-09 07:59:40.642639] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:38.764 [2024-10-09 07:59:40.642656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.764 [2024-10-09 07:59:40.642671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:38.764 [2024-10-09 07:59:40.642683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:19:38.764 [2024-10-09 07:59:40.642694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.764 [2024-10-09 07:59:40.674680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.764 [2024-10-09 07:59:40.674724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:38.764 [2024-10-09 07:59:40.674741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.954 ms 00:19:38.764 [2024-10-09 07:59:40.674754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.764 [2024-10-09 07:59:40.674926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.764 [2024-10-09 07:59:40.674967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:38.764 [2024-10-09 07:59:40.674983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:19:38.764 [2024-10-09 07:59:40.674995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
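The "FTL startup" management sequence traced through this stretch (Check configuration, Open base/cache bdev, Load super block, restore metadata, Set FTL dirty state, Finalize initialization) is what runs when an FTL bdev is brought up over a base bdev plus an NV-cache bdev. A hedged sketch of the RPC that kicks it off; nvc0n1p0 matches the write-buffer cache named earlier in this log, while basedev0 is a placeholder, since the log does not name the base bdev here:

  # Create FTL bdev "ftl0" over a base device and an NV-cache device
  # (basedev0 is illustrative; nvc0n1p0 is the cache reported in the log).
  $SPDK/scripts/rpc.py bdev_ftl_create -b ftl0 -d basedev0 -c nvc0n1p0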
00:19:38.764 [2024-10-09 07:59:40.676065] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:38.764 [2024-10-09 07:59:40.680574] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 353.813 ms, result 0 00:19:38.764 [2024-10-09 07:59:40.681470] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:38.764 [2024-10-09 07:59:40.698531] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:39.023  [2024-10-09T07:59:41.035Z] Copying: 4096/4096 [kB] (average 24 MBps)[2024-10-09 07:59:40.866953] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:39.023 [2024-10-09 07:59:40.879429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.023 [2024-10-09 07:59:40.879476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:39.023 [2024-10-09 07:59:40.879495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:39.023 [2024-10-09 07:59:40.879507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.023 [2024-10-09 07:59:40.879540] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:19:39.023 [2024-10-09 07:59:40.882954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.023 [2024-10-09 07:59:40.882990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:39.023 [2024-10-09 07:59:40.883005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.392 ms 00:19:39.023 [2024-10-09 07:59:40.883016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.023 [2024-10-09 07:59:40.884713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.023 [2024-10-09 07:59:40.884762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:39.023 [2024-10-09 07:59:40.884779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.664 ms 00:19:39.023 [2024-10-09 07:59:40.884791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.023 [2024-10-09 07:59:40.888831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.023 [2024-10-09 07:59:40.888872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:39.023 [2024-10-09 07:59:40.888888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.015 ms 00:19:39.023 [2024-10-09 07:59:40.888900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.023 [2024-10-09 07:59:40.896492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.023 [2024-10-09 07:59:40.896537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:39.023 [2024-10-09 07:59:40.896552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.547 ms 00:19:39.023 [2024-10-09 07:59:40.896563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.023 [2024-10-09 07:59:40.928951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.023 [2024-10-09 07:59:40.929043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:39.023 [2024-10-09 07:59:40.929072] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 32.301 ms 00:19:39.023 [2024-10-09 07:59:40.929084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.023 [2024-10-09 07:59:40.947775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.023 [2024-10-09 07:59:40.947828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:39.023 [2024-10-09 07:59:40.947854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.572 ms 00:19:39.023 [2024-10-09 07:59:40.947866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.023 [2024-10-09 07:59:40.948044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.023 [2024-10-09 07:59:40.948066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:39.023 [2024-10-09 07:59:40.948079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms 00:19:39.023 [2024-10-09 07:59:40.948100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.023 [2024-10-09 07:59:40.980546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.023 [2024-10-09 07:59:40.980603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:39.023 [2024-10-09 07:59:40.980620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.420 ms 00:19:39.023 [2024-10-09 07:59:40.980631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.023 [2024-10-09 07:59:41.012311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.023 [2024-10-09 07:59:41.012366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:39.023 [2024-10-09 07:59:41.012385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.605 ms 00:19:39.023 [2024-10-09 07:59:41.012397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.282 [2024-10-09 07:59:41.045246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.282 [2024-10-09 07:59:41.045328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:39.282 [2024-10-09 07:59:41.045356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.774 ms 00:19:39.282 [2024-10-09 07:59:41.045370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.282 [2024-10-09 07:59:41.077284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.282 [2024-10-09 07:59:41.077337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:39.282 [2024-10-09 07:59:41.077358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.774 ms 00:19:39.282 [2024-10-09 07:59:41.077370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.283 [2024-10-09 07:59:41.077466] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:39.283 [2024-10-09 07:59:41.077494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:39.283 [2024-10-09 07:59:41.077508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:39.283 [2024-10-09 07:59:41.077520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:39.283 [2024-10-09 07:59:41.077531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 
[ftl_dev_dump_bands: Band 5 through Band 100 each report 0 / 261120 wr_cnt: 0 state: free] 00:19:39.284 [2024-10-09 07:59:41.078712] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:39.284 [2024-10-09 07:59:41.078723] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 2a716a08-2588-4711-9bfe-c66b02b59b71 00:19:39.284 [2024-10-09 07:59:41.078735] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:39.284 [2024-10-09 07:59:41.078746] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total 
writes: 960 00:19:39.284 [2024-10-09 07:59:41.078762] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:39.284 [2024-10-09 07:59:41.078774] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:39.284 [2024-10-09 07:59:41.078785] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:39.284 [2024-10-09 07:59:41.078796] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:39.284 [2024-10-09 07:59:41.078806] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:39.284 [2024-10-09 07:59:41.078817] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:39.284 [2024-10-09 07:59:41.078827] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:39.284 [2024-10-09 07:59:41.078838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.284 [2024-10-09 07:59:41.078849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:39.284 [2024-10-09 07:59:41.078862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.375 ms 00:19:39.284 [2024-10-09 07:59:41.078873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.284 [2024-10-09 07:59:41.096174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.284 [2024-10-09 07:59:41.096216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:39.284 [2024-10-09 07:59:41.096234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.271 ms 00:19:39.284 [2024-10-09 07:59:41.096245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.284 [2024-10-09 07:59:41.096726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.284 [2024-10-09 07:59:41.096750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:39.284 [2024-10-09 07:59:41.096764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.423 ms 00:19:39.284 [2024-10-09 07:59:41.096774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.284 [2024-10-09 07:59:41.137926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:39.284 [2024-10-09 07:59:41.138016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:39.284 [2024-10-09 07:59:41.138036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:39.284 [2024-10-09 07:59:41.138048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.284 [2024-10-09 07:59:41.138162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:39.284 [2024-10-09 07:59:41.138178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:39.284 [2024-10-09 07:59:41.138191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:39.284 [2024-10-09 07:59:41.138202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.284 [2024-10-09 07:59:41.138273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:39.284 [2024-10-09 07:59:41.138294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:39.284 [2024-10-09 07:59:41.138307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:39.284 [2024-10-09 07:59:41.138318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.284 [2024-10-09 07:59:41.138343] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:39.284 [2024-10-09 07:59:41.138380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:39.284 [2024-10-09 07:59:41.138392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:39.284 [2024-10-09 07:59:41.138413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.284 [2024-10-09 07:59:41.242768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:39.284 [2024-10-09 07:59:41.242852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:39.284 [2024-10-09 07:59:41.242871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:39.284 [2024-10-09 07:59:41.242883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.542 [2024-10-09 07:59:41.328578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:39.542 [2024-10-09 07:59:41.328646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:39.542 [2024-10-09 07:59:41.328665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:39.542 [2024-10-09 07:59:41.328677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.542 [2024-10-09 07:59:41.328765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:39.542 [2024-10-09 07:59:41.328792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:39.542 [2024-10-09 07:59:41.328805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:39.542 [2024-10-09 07:59:41.328816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.542 [2024-10-09 07:59:41.328851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:39.542 [2024-10-09 07:59:41.328865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:39.542 [2024-10-09 07:59:41.328876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:39.542 [2024-10-09 07:59:41.328887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.542 [2024-10-09 07:59:41.329016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:39.542 [2024-10-09 07:59:41.329036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:39.542 [2024-10-09 07:59:41.329056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:39.542 [2024-10-09 07:59:41.329066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.542 [2024-10-09 07:59:41.329118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:39.542 [2024-10-09 07:59:41.329137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:39.542 [2024-10-09 07:59:41.329148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:39.542 [2024-10-09 07:59:41.329159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.542 [2024-10-09 07:59:41.329208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:39.542 [2024-10-09 07:59:41.329223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:39.542 [2024-10-09 07:59:41.329235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:39.542 [2024-10-09 07:59:41.329253] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:19:39.542 [2024-10-09 07:59:41.329308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:39.542 [2024-10-09 07:59:41.329325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:39.542 [2024-10-09 07:59:41.329357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:39.542 [2024-10-09 07:59:41.329370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.542 [2024-10-09 07:59:41.329547] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 450.107 ms, result 0 00:19:40.478 00:19:40.478 00:19:40.478 07:59:42 ftl.ftl_trim -- ftl/trim.sh@93 -- # svcpid=76602 00:19:40.478 07:59:42 ftl.ftl_trim -- ftl/trim.sh@94 -- # waitforlisten 76602 00:19:40.478 07:59:42 ftl.ftl_trim -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:19:40.478 07:59:42 ftl.ftl_trim -- common/autotest_common.sh@831 -- # '[' -z 76602 ']' 00:19:40.478 07:59:42 ftl.ftl_trim -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:40.478 07:59:42 ftl.ftl_trim -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:40.478 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:40.478 07:59:42 ftl.ftl_trim -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:40.478 07:59:42 ftl.ftl_trim -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:40.478 07:59:42 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:19:40.737 [2024-10-09 07:59:42.574590] Starting SPDK v25.01-pre git sha1 1c2942c86 / DPDK 24.03.0 initialization... 
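The shutdown trace above closes the previous stage of ftl_trim; the records that follow come from trim.sh starting a fresh spdk_tgt (pid 76602) and driving it over the RPC socket. A minimal sketch of that sequence, reconstructed from the xtrace lines in this log — the config-file path fed to load_config is an assumption inferred from the spdk_dd invocation later in the log, the waitforlisten loop here is a simplified stand-in, and the real script's retry and error handling is omitted:

  # Sketch only; commands and paths as they appear in the surrounding log,
  # not the verbatim test/ftl/trim.sh.
  SPDK=/home/vagrant/spdk_repo/spdk
  "$SPDK/build/bin/spdk_tgt" -L ftl_init &         # trim.sh@92; -L enables the ftl_init trace seen below
  svcpid=$!                                        # trim.sh@93 records this pid (76602 in this run)
  # trim.sh@94 waitforlisten: poll until the target answers on /var/tmp/spdk.sock
  until "$SPDK/scripts/rpc.py" rpc_get_methods >/dev/null 2>&1; do sleep 0.1; done
  # trim.sh@96: replay the saved bdev/FTL config (load_config reads JSON on
  # stdin; the file path is assumed from the later spdk_dd --json argument)
  "$SPDK/scripts/rpc.py" load_config < "$SPDK/test/ftl/config/ftl.json"
  # trim.sh@99/@100: trim 1024 blocks at each end of the 23592960-entry L2P
  # space (23591936 + 1024 = 23592960, i.e. exactly the last 1024 LBAs)
  "$SPDK/scripts/rpc.py" bdev_ftl_unmap -b ftl0 --lba 0        --num_blocks 1024
  "$SPDK/scripts/rpc.py" bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024
  kill "$svcpid"; wait "$svcpid"                   # trim.sh@102 killprocess -> the 'FTL shutdown' trace
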
00:19:40.737 [2024-10-09 07:59:42.574845] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76602 ] 00:19:40.995 [2024-10-09 07:59:42.766504] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:40.995 [2024-10-09 07:59:42.954200] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:19:41.943 07:59:43 ftl.ftl_trim -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:41.943 07:59:43 ftl.ftl_trim -- common/autotest_common.sh@864 -- # return 0 00:19:41.943 07:59:43 ftl.ftl_trim -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:19:42.201 [2024-10-09 07:59:44.039526] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:42.201 [2024-10-09 07:59:44.039620] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:42.460 [2024-10-09 07:59:44.228466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:42.461 [2024-10-09 07:59:44.228558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:42.461 [2024-10-09 07:59:44.228584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:19:42.461 [2024-10-09 07:59:44.228598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.461 [2024-10-09 07:59:44.232955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:42.461 [2024-10-09 07:59:44.233017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:42.461 [2024-10-09 07:59:44.233060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.324 ms 00:19:42.461 [2024-10-09 07:59:44.233073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.461 [2024-10-09 07:59:44.233217] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:42.461 [2024-10-09 07:59:44.234174] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:42.461 [2024-10-09 07:59:44.234225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:42.461 [2024-10-09 07:59:44.234241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:42.461 [2024-10-09 07:59:44.234258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.023 ms 00:19:42.461 [2024-10-09 07:59:44.234270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.461 [2024-10-09 07:59:44.235561] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:42.461 [2024-10-09 07:59:44.252692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:42.461 [2024-10-09 07:59:44.252756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:42.461 [2024-10-09 07:59:44.252778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.137 ms 00:19:42.461 [2024-10-09 07:59:44.252799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.461 [2024-10-09 07:59:44.252947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:42.461 [2024-10-09 07:59:44.252985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:42.461 [2024-10-09 07:59:44.253002] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:19:42.461 [2024-10-09 07:59:44.253021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.461 [2024-10-09 07:59:44.257498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:42.461 [2024-10-09 07:59:44.257575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:42.461 [2024-10-09 07:59:44.257596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.399 ms 00:19:42.461 [2024-10-09 07:59:44.257616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.461 [2024-10-09 07:59:44.257824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:42.461 [2024-10-09 07:59:44.257856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:42.461 [2024-10-09 07:59:44.257873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.124 ms 00:19:42.461 [2024-10-09 07:59:44.257892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.461 [2024-10-09 07:59:44.257934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:42.461 [2024-10-09 07:59:44.257960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:42.461 [2024-10-09 07:59:44.257976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:19:42.461 [2024-10-09 07:59:44.257995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.461 [2024-10-09 07:59:44.258034] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:42.461 [2024-10-09 07:59:44.262417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:42.461 [2024-10-09 07:59:44.262458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:42.461 [2024-10-09 07:59:44.262484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.386 ms 00:19:42.461 [2024-10-09 07:59:44.262505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.461 [2024-10-09 07:59:44.262593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:42.461 [2024-10-09 07:59:44.262614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:42.461 [2024-10-09 07:59:44.262635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:19:42.461 [2024-10-09 07:59:44.262650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.461 [2024-10-09 07:59:44.262689] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:42.461 [2024-10-09 07:59:44.262723] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:42.461 [2024-10-09 07:59:44.262788] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:42.461 [2024-10-09 07:59:44.262822] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:19:42.461 [2024-10-09 07:59:44.262953] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:42.461 [2024-10-09 07:59:44.262982] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:42.461 [2024-10-09 07:59:44.263010] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:42.461 [2024-10-09 07:59:44.263029] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:42.461 [2024-10-09 07:59:44.263051] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:42.461 [2024-10-09 07:59:44.263067] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:42.461 [2024-10-09 07:59:44.263085] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:42.461 [2024-10-09 07:59:44.263099] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:42.461 [2024-10-09 07:59:44.263121] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:42.461 [2024-10-09 07:59:44.263142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:42.461 [2024-10-09 07:59:44.263161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:42.461 [2024-10-09 07:59:44.263176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.465 ms 00:19:42.461 [2024-10-09 07:59:44.263191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.461 [2024-10-09 07:59:44.263320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:42.461 [2024-10-09 07:59:44.263371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:42.461 [2024-10-09 07:59:44.263387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:19:42.461 [2024-10-09 07:59:44.263402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.461 [2024-10-09 07:59:44.263519] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:42.461 [2024-10-09 07:59:44.263561] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:42.461 [2024-10-09 07:59:44.263576] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:42.461 [2024-10-09 07:59:44.263592] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:42.461 [2024-10-09 07:59:44.263623] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:42.461 [2024-10-09 07:59:44.263648] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:42.461 [2024-10-09 07:59:44.263668] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:42.461 [2024-10-09 07:59:44.263691] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:42.461 [2024-10-09 07:59:44.263704] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:42.461 [2024-10-09 07:59:44.263718] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:42.461 [2024-10-09 07:59:44.263729] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:42.461 [2024-10-09 07:59:44.263743] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:42.461 [2024-10-09 07:59:44.263754] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:42.461 [2024-10-09 07:59:44.263768] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:42.461 [2024-10-09 07:59:44.263780] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:19:42.461 [2024-10-09 07:59:44.263794] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:42.461 
[2024-10-09 07:59:44.263805] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:42.461 [2024-10-09 07:59:44.263819] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:19:42.461 [2024-10-09 07:59:44.263842] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:42.461 [2024-10-09 07:59:44.263857] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:42.461 [2024-10-09 07:59:44.263870] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:42.461 [2024-10-09 07:59:44.263883] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:42.461 [2024-10-09 07:59:44.263895] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:42.461 [2024-10-09 07:59:44.263910] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:42.461 [2024-10-09 07:59:44.263922] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:42.461 [2024-10-09 07:59:44.263935] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:42.461 [2024-10-09 07:59:44.263946] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:42.461 [2024-10-09 07:59:44.263960] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:42.461 [2024-10-09 07:59:44.263971] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:42.461 [2024-10-09 07:59:44.263985] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:19:42.461 [2024-10-09 07:59:44.263996] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:42.461 [2024-10-09 07:59:44.264012] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:42.461 [2024-10-09 07:59:44.264024] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:19:42.461 [2024-10-09 07:59:44.264037] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:42.461 [2024-10-09 07:59:44.264049] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:42.461 [2024-10-09 07:59:44.264065] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:19:42.461 [2024-10-09 07:59:44.264076] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:42.461 [2024-10-09 07:59:44.264090] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:42.461 [2024-10-09 07:59:44.264102] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:19:42.461 [2024-10-09 07:59:44.264117] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:42.461 [2024-10-09 07:59:44.264129] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:42.461 [2024-10-09 07:59:44.264142] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:19:42.461 [2024-10-09 07:59:44.264153] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:42.461 [2024-10-09 07:59:44.264166] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:42.461 [2024-10-09 07:59:44.264179] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:42.461 [2024-10-09 07:59:44.264193] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:42.461 [2024-10-09 07:59:44.264205] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:42.462 [2024-10-09 07:59:44.264228] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:19:42.462 [2024-10-09 07:59:44.264243] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:42.462 [2024-10-09 07:59:44.264261] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:42.462 [2024-10-09 07:59:44.264276] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:42.462 [2024-10-09 07:59:44.264293] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:42.462 [2024-10-09 07:59:44.264307] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:42.462 [2024-10-09 07:59:44.264327] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:42.462 [2024-10-09 07:59:44.264362] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:42.462 [2024-10-09 07:59:44.264388] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:42.462 [2024-10-09 07:59:44.264403] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:19:42.462 [2024-10-09 07:59:44.264423] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:19:42.462 [2024-10-09 07:59:44.264438] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:19:42.462 [2024-10-09 07:59:44.264456] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:19:42.462 [2024-10-09 07:59:44.264470] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:19:42.462 [2024-10-09 07:59:44.264488] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:19:42.462 [2024-10-09 07:59:44.264502] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:19:42.462 [2024-10-09 07:59:44.264520] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:19:42.462 [2024-10-09 07:59:44.264534] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:19:42.462 [2024-10-09 07:59:44.264552] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:19:42.462 [2024-10-09 07:59:44.264566] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:19:42.462 [2024-10-09 07:59:44.264585] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:19:42.462 [2024-10-09 07:59:44.264599] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:19:42.462 [2024-10-09 07:59:44.264617] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:42.462 [2024-10-09 
07:59:44.264633] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:42.462 [2024-10-09 07:59:44.264666] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:42.462 [2024-10-09 07:59:44.264681] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:42.462 [2024-10-09 07:59:44.264695] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:42.462 [2024-10-09 07:59:44.264708] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:42.462 [2024-10-09 07:59:44.264725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:42.462 [2024-10-09 07:59:44.264738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:42.462 [2024-10-09 07:59:44.264753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.274 ms 00:19:42.462 [2024-10-09 07:59:44.264765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.462 [2024-10-09 07:59:44.300787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:42.462 [2024-10-09 07:59:44.300870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:42.462 [2024-10-09 07:59:44.300912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.937 ms 00:19:42.462 [2024-10-09 07:59:44.300927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.462 [2024-10-09 07:59:44.301128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:42.462 [2024-10-09 07:59:44.301149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:42.462 [2024-10-09 07:59:44.301165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:19:42.462 [2024-10-09 07:59:44.301178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.462 [2024-10-09 07:59:44.355576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:42.462 [2024-10-09 07:59:44.355685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:42.462 [2024-10-09 07:59:44.355746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 54.351 ms 00:19:42.462 [2024-10-09 07:59:44.355767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.462 [2024-10-09 07:59:44.355984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:42.462 [2024-10-09 07:59:44.356013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:42.462 [2024-10-09 07:59:44.356043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:42.462 [2024-10-09 07:59:44.356070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.462 [2024-10-09 07:59:44.356509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:42.462 [2024-10-09 07:59:44.356556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:42.462 [2024-10-09 07:59:44.356588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.386 ms 00:19:42.462 [2024-10-09 07:59:44.356608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:19:42.462 [2024-10-09 07:59:44.356831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:42.462 [2024-10-09 07:59:44.356864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:42.462 [2024-10-09 07:59:44.356887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.177 ms 00:19:42.462 [2024-10-09 07:59:44.356905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.462 [2024-10-09 07:59:44.382586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:42.462 [2024-10-09 07:59:44.382667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:42.462 [2024-10-09 07:59:44.382700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.624 ms 00:19:42.462 [2024-10-09 07:59:44.382723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.462 [2024-10-09 07:59:44.405883] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:19:42.462 [2024-10-09 07:59:44.405956] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:42.462 [2024-10-09 07:59:44.405996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:42.462 [2024-10-09 07:59:44.406019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:42.462 [2024-10-09 07:59:44.406050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.926 ms 00:19:42.462 [2024-10-09 07:59:44.406071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.462 [2024-10-09 07:59:44.464035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:42.462 [2024-10-09 07:59:44.464110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:42.462 [2024-10-09 07:59:44.464140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 57.759 ms 00:19:42.462 [2024-10-09 07:59:44.464173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.721 [2024-10-09 07:59:44.480668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:42.721 [2024-10-09 07:59:44.480757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:42.721 [2024-10-09 07:59:44.480787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.272 ms 00:19:42.721 [2024-10-09 07:59:44.480802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.721 [2024-10-09 07:59:44.497633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:42.721 [2024-10-09 07:59:44.497901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:42.721 [2024-10-09 07:59:44.497942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.658 ms 00:19:42.721 [2024-10-09 07:59:44.497957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.721 [2024-10-09 07:59:44.498900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:42.721 [2024-10-09 07:59:44.498939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:42.721 [2024-10-09 07:59:44.498966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.744 ms 00:19:42.721 [2024-10-09 07:59:44.498982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.721 [2024-10-09 
07:59:44.574162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:42.721 [2024-10-09 07:59:44.574257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:42.721 [2024-10-09 07:59:44.574290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 75.121 ms 00:19:42.721 [2024-10-09 07:59:44.574313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.721 [2024-10-09 07:59:44.587204] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:42.721 [2024-10-09 07:59:44.601415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:42.721 [2024-10-09 07:59:44.601525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:42.721 [2024-10-09 07:59:44.601552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.901 ms 00:19:42.721 [2024-10-09 07:59:44.601569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.721 [2024-10-09 07:59:44.601720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:42.721 [2024-10-09 07:59:44.601746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:42.721 [2024-10-09 07:59:44.601760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:19:42.721 [2024-10-09 07:59:44.601775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.721 [2024-10-09 07:59:44.601847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:42.721 [2024-10-09 07:59:44.601867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:42.721 [2024-10-09 07:59:44.601882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:19:42.721 [2024-10-09 07:59:44.601897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.721 [2024-10-09 07:59:44.601932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:42.721 [2024-10-09 07:59:44.601950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:42.721 [2024-10-09 07:59:44.601966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:19:42.721 [2024-10-09 07:59:44.601988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.721 [2024-10-09 07:59:44.602033] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:42.721 [2024-10-09 07:59:44.602058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:42.721 [2024-10-09 07:59:44.602070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:42.721 [2024-10-09 07:59:44.602086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:19:42.721 [2024-10-09 07:59:44.602098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.721 [2024-10-09 07:59:44.634180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:42.721 [2024-10-09 07:59:44.634460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:42.721 [2024-10-09 07:59:44.634501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.043 ms 00:19:42.721 [2024-10-09 07:59:44.634516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.721 [2024-10-09 07:59:44.634679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:42.721 [2024-10-09 07:59:44.634702] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:42.722 [2024-10-09 07:59:44.634720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:19:42.722 [2024-10-09 07:59:44.634733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.722 [2024-10-09 07:59:44.635888] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:42.722 [2024-10-09 07:59:44.640074] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 406.995 ms, result 0 00:19:42.722 [2024-10-09 07:59:44.641143] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:42.722 Some configs were skipped because the RPC state that can call them passed over. 00:19:42.722 07:59:44 ftl.ftl_trim -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:19:42.980 [2024-10-09 07:59:44.979318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:42.980 [2024-10-09 07:59:44.979593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:19:42.980 [2024-10-09 07:59:44.979811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.488 ms 00:19:42.980 [2024-10-09 07:59:44.979989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.980 [2024-10-09 07:59:44.980181] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.348 ms, result 0 00:19:42.980 true 00:19:43.239 07:59:44 ftl.ftl_trim -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:19:43.496 [2024-10-09 07:59:45.251185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.496 [2024-10-09 07:59:45.251420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:19:43.496 [2024-10-09 07:59:45.251564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.961 ms 00:19:43.496 [2024-10-09 07:59:45.251766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.496 [2024-10-09 07:59:45.251883] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.664 ms, result 0 00:19:43.496 true 00:19:43.496 07:59:45 ftl.ftl_trim -- ftl/trim.sh@102 -- # killprocess 76602 00:19:43.496 07:59:45 ftl.ftl_trim -- common/autotest_common.sh@950 -- # '[' -z 76602 ']' 00:19:43.497 07:59:45 ftl.ftl_trim -- common/autotest_common.sh@954 -- # kill -0 76602 00:19:43.497 07:59:45 ftl.ftl_trim -- common/autotest_common.sh@955 -- # uname 00:19:43.497 07:59:45 ftl.ftl_trim -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:43.497 07:59:45 ftl.ftl_trim -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76602 00:19:43.497 killing process with pid 76602 00:19:43.497 07:59:45 ftl.ftl_trim -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:43.497 07:59:45 ftl.ftl_trim -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:43.497 07:59:45 ftl.ftl_trim -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76602' 00:19:43.497 07:59:45 ftl.ftl_trim -- common/autotest_common.sh@969 -- # kill 76602 00:19:43.497 07:59:45 ftl.ftl_trim -- common/autotest_common.sh@974 -- # wait 76602 00:19:44.447 [2024-10-09 07:59:46.258844] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.447 [2024-10-09 07:59:46.258927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:44.447 [2024-10-09 07:59:46.258950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:44.447 [2024-10-09 07:59:46.258965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.447 [2024-10-09 07:59:46.259000] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:19:44.447 [2024-10-09 07:59:46.262356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.447 [2024-10-09 07:59:46.262404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:44.447 [2024-10-09 07:59:46.262428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.326 ms 00:19:44.447 [2024-10-09 07:59:46.262441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.447 [2024-10-09 07:59:46.262750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.447 [2024-10-09 07:59:46.262771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:44.447 [2024-10-09 07:59:46.262787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.256 ms 00:19:44.447 [2024-10-09 07:59:46.262803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.447 [2024-10-09 07:59:46.266934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.447 [2024-10-09 07:59:46.266981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:44.447 [2024-10-09 07:59:46.267004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.078 ms 00:19:44.447 [2024-10-09 07:59:46.267017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.447 [2024-10-09 07:59:46.274588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.447 [2024-10-09 07:59:46.274629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:44.447 [2024-10-09 07:59:46.274649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.517 ms 00:19:44.447 [2024-10-09 07:59:46.274665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.447 [2024-10-09 07:59:46.287456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.447 [2024-10-09 07:59:46.287506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:44.447 [2024-10-09 07:59:46.287532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.703 ms 00:19:44.447 [2024-10-09 07:59:46.287546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.447 [2024-10-09 07:59:46.296136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.447 [2024-10-09 07:59:46.296186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:44.447 [2024-10-09 07:59:46.296209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.498 ms 00:19:44.447 [2024-10-09 07:59:46.296240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.447 [2024-10-09 07:59:46.296430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.447 [2024-10-09 07:59:46.296453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:44.447 [2024-10-09 07:59:46.296471] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.115 ms 00:19:44.447 [2024-10-09 07:59:46.296487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.447 [2024-10-09 07:59:46.309300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.447 [2024-10-09 07:59:46.309380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:44.447 [2024-10-09 07:59:46.309406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.773 ms 00:19:44.447 [2024-10-09 07:59:46.309420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.447 [2024-10-09 07:59:46.322487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.447 [2024-10-09 07:59:46.322563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:44.447 [2024-10-09 07:59:46.322591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.954 ms 00:19:44.447 [2024-10-09 07:59:46.322604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.447 [2024-10-09 07:59:46.334949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.448 [2024-10-09 07:59:46.334995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:44.448 [2024-10-09 07:59:46.335024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.248 ms 00:19:44.448 [2024-10-09 07:59:46.335038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.448 [2024-10-09 07:59:46.347368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.448 [2024-10-09 07:59:46.347413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:44.448 [2024-10-09 07:59:46.347440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.208 ms 00:19:44.448 [2024-10-09 07:59:46.347454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.448 [2024-10-09 07:59:46.347558] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:44.448 [2024-10-09 07:59:46.347596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:44.448 [2024-10-09 07:59:46.347644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:44.448 [2024-10-09 07:59:46.347669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:44.448 [2024-10-09 07:59:46.347711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:44.448 [2024-10-09 07:59:46.347747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:44.448 [2024-10-09 07:59:46.347788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:44.448 [2024-10-09 07:59:46.347804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:44.448 [2024-10-09 07:59:46.347825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:44.448 [2024-10-09 07:59:46.347840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:44.448 [2024-10-09 07:59:46.347863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:44.448 [2024-10-09 
07:59:46.347878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:44.448 [2024-10-09 07:59:46.347897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:44.448 [2024-10-09 07:59:46.347913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:44.448 [2024-10-09 07:59:46.347932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:44.448 [2024-10-09 07:59:46.347947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:44.448 [2024-10-09 07:59:46.347967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:44.448 [2024-10-09 07:59:46.347982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:44.448 [2024-10-09 07:59:46.347998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:44.448 [2024-10-09 07:59:46.348012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:44.448 [2024-10-09 07:59:46.348027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:44.448 [2024-10-09 07:59:46.348040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:44.448 [2024-10-09 07:59:46.348058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:44.448 [2024-10-09 07:59:46.348071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:44.448 [2024-10-09 07:59:46.348086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:44.448 [2024-10-09 07:59:46.348099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:44.448 [2024-10-09 07:59:46.348114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:44.448 [2024-10-09 07:59:46.348129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:44.448 [2024-10-09 07:59:46.348144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:44.448 [2024-10-09 07:59:46.348157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:44.448 [2024-10-09 07:59:46.348172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:44.448 [2024-10-09 07:59:46.348186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:44.448 [2024-10-09 07:59:46.348201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:44.448 [2024-10-09 07:59:46.348214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:44.448 [2024-10-09 07:59:46.348229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:44.448 [2024-10-09 07:59:46.348243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 
00:19:44.448 [2024-10-09 07:59:46.348258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:44.448 [2024-10-09 07:59:46.348271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:44.448 [2024-10-09 07:59:46.348290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:44.448 [2024-10-09 07:59:46.348304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:44.448 [2024-10-09 07:59:46.348326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:44.448 [2024-10-09 07:59:46.348364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:44.448 [2024-10-09 07:59:46.348393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:44.448 [2024-10-09 07:59:46.348408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:44.448 [2024-10-09 07:59:46.348423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:44.448 [2024-10-09 07:59:46.348436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:44.448 [2024-10-09 07:59:46.348450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:44.448 [2024-10-09 07:59:46.348463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:44.448 [2024-10-09 07:59:46.348478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:44.448 [2024-10-09 07:59:46.348491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:44.448 [2024-10-09 07:59:46.348506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:44.448 [2024-10-09 07:59:46.348519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:44.448 [2024-10-09 07:59:46.348533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:44.448 [2024-10-09 07:59:46.348546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:44.448 [2024-10-09 07:59:46.348563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:44.448 [2024-10-09 07:59:46.348582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:44.448 [2024-10-09 07:59:46.348603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:44.448 [2024-10-09 07:59:46.348617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:44.448 [2024-10-09 07:59:46.348632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:44.448 [2024-10-09 07:59:46.348645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:44.448 [2024-10-09 07:59:46.348661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 
wr_cnt: 0 state: free 00:19:44.448 [2024-10-09 07:59:46.348674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:44.448 [2024-10-09 07:59:46.348688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:44.448 [2024-10-09 07:59:46.348702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:44.448 [2024-10-09 07:59:46.348719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:44.448 [2024-10-09 07:59:46.348732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:44.448 [2024-10-09 07:59:46.348748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:44.448 [2024-10-09 07:59:46.348761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:44.448 [2024-10-09 07:59:46.348776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:44.448 [2024-10-09 07:59:46.348789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:44.448 [2024-10-09 07:59:46.348806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:44.448 [2024-10-09 07:59:46.348833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:44.448 [2024-10-09 07:59:46.348852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:44.448 [2024-10-09 07:59:46.348866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:44.448 [2024-10-09 07:59:46.348882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:44.448 [2024-10-09 07:59:46.348895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:44.448 [2024-10-09 07:59:46.348909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:44.448 [2024-10-09 07:59:46.348922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:44.448 [2024-10-09 07:59:46.348937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:44.448 [2024-10-09 07:59:46.348950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:44.448 [2024-10-09 07:59:46.348965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:44.448 [2024-10-09 07:59:46.348978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:44.448 [2024-10-09 07:59:46.348992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:44.448 [2024-10-09 07:59:46.349005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:44.448 [2024-10-09 07:59:46.349020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:44.448 [2024-10-09 07:59:46.349032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:44.448 [2024-10-09 07:59:46.349050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:44.448 [2024-10-09 07:59:46.349062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:44.448 [2024-10-09 07:59:46.349077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:44.448 [2024-10-09 07:59:46.349090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:44.449 [2024-10-09 07:59:46.349107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:44.449 [2024-10-09 07:59:46.349120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:44.449 [2024-10-09 07:59:46.349135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:44.449 [2024-10-09 07:59:46.349148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:44.449 [2024-10-09 07:59:46.349162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:44.449 [2024-10-09 07:59:46.349175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:44.449 [2024-10-09 07:59:46.349190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:44.449 [2024-10-09 07:59:46.349203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:44.449 [2024-10-09 07:59:46.349218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:44.449 [2024-10-09 07:59:46.349231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:44.449 [2024-10-09 07:59:46.349245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:44.449 [2024-10-09 07:59:46.349268] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:44.449 [2024-10-09 07:59:46.349286] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 2a716a08-2588-4711-9bfe-c66b02b59b71 00:19:44.449 [2024-10-09 07:59:46.349299] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:44.449 [2024-10-09 07:59:46.349313] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:44.449 [2024-10-09 07:59:46.349328] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:44.449 [2024-10-09 07:59:46.349358] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:44.449 [2024-10-09 07:59:46.349384] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:44.449 [2024-10-09 07:59:46.349400] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:44.449 [2024-10-09 07:59:46.349415] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:44.449 [2024-10-09 07:59:46.349428] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:44.449 [2024-10-09 07:59:46.349439] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:44.449 [2024-10-09 07:59:46.349454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:19:44.449 [2024-10-09 07:59:46.349468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:44.449 [2024-10-09 07:59:46.349485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.932 ms 00:19:44.449 [2024-10-09 07:59:46.349498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.449 [2024-10-09 07:59:46.366447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.449 [2024-10-09 07:59:46.366501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:44.449 [2024-10-09 07:59:46.366529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.894 ms 00:19:44.449 [2024-10-09 07:59:46.366548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.449 [2024-10-09 07:59:46.367078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.449 [2024-10-09 07:59:46.367114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:44.449 [2024-10-09 07:59:46.367134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.406 ms 00:19:44.449 [2024-10-09 07:59:46.367147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.449 [2024-10-09 07:59:46.420704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:44.449 [2024-10-09 07:59:46.420775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:44.449 [2024-10-09 07:59:46.420800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:44.449 [2024-10-09 07:59:46.420817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.449 [2024-10-09 07:59:46.420962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:44.449 [2024-10-09 07:59:46.420981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:44.449 [2024-10-09 07:59:46.420997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:44.449 [2024-10-09 07:59:46.421010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.449 [2024-10-09 07:59:46.421088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:44.449 [2024-10-09 07:59:46.421109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:44.449 [2024-10-09 07:59:46.421128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:44.449 [2024-10-09 07:59:46.421140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.449 [2024-10-09 07:59:46.421173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:44.449 [2024-10-09 07:59:46.421188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:44.449 [2024-10-09 07:59:46.421203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:44.449 [2024-10-09 07:59:46.421216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.708 [2024-10-09 07:59:46.525949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:44.708 [2024-10-09 07:59:46.526012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:44.708 [2024-10-09 07:59:46.526041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:44.708 [2024-10-09 07:59:46.526057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.708 [2024-10-09 
07:59:46.612067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:44.708 [2024-10-09 07:59:46.612143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:44.708 [2024-10-09 07:59:46.612167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:44.708 [2024-10-09 07:59:46.612181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.708 [2024-10-09 07:59:46.612298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:44.708 [2024-10-09 07:59:46.612318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:44.708 [2024-10-09 07:59:46.612363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:44.708 [2024-10-09 07:59:46.612381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.708 [2024-10-09 07:59:46.612423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:44.708 [2024-10-09 07:59:46.612456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:44.708 [2024-10-09 07:59:46.612476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:44.708 [2024-10-09 07:59:46.612491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.708 [2024-10-09 07:59:46.612635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:44.708 [2024-10-09 07:59:46.612657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:44.708 [2024-10-09 07:59:46.612678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:44.708 [2024-10-09 07:59:46.612692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.708 [2024-10-09 07:59:46.612759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:44.708 [2024-10-09 07:59:46.612780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:44.708 [2024-10-09 07:59:46.612810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:44.708 [2024-10-09 07:59:46.612824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.708 [2024-10-09 07:59:46.612883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:44.708 [2024-10-09 07:59:46.612907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:44.708 [2024-10-09 07:59:46.612932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:44.708 [2024-10-09 07:59:46.612948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.708 [2024-10-09 07:59:46.613015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:44.708 [2024-10-09 07:59:46.613040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:44.708 [2024-10-09 07:59:46.613060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:44.708 [2024-10-09 07:59:46.613074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.708 [2024-10-09 07:59:46.613255] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 354.380 ms, result 0 00:19:46.083 07:59:47 ftl.ftl_trim -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 
--json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:46.083 [2024-10-09 07:59:47.802363] Starting SPDK v25.01-pre git sha1 1c2942c86 / DPDK 24.03.0 initialization... 00:19:46.084 [2024-10-09 07:59:47.803244] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76670 ] 00:19:46.084 [2024-10-09 07:59:47.978721] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:46.342 [2024-10-09 07:59:48.205786] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:19:46.601 [2024-10-09 07:59:48.524352] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:46.601 [2024-10-09 07:59:48.524428] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:46.860 [2024-10-09 07:59:48.686885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.860 [2024-10-09 07:59:48.686957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:46.860 [2024-10-09 07:59:48.686982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:46.860 [2024-10-09 07:59:48.686995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.860 [2024-10-09 07:59:48.690364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.860 [2024-10-09 07:59:48.690411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:46.860 [2024-10-09 07:59:48.690428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.339 ms 00:19:46.860 [2024-10-09 07:59:48.690441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.860 [2024-10-09 07:59:48.690577] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:46.860 [2024-10-09 07:59:48.691533] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:46.860 [2024-10-09 07:59:48.691751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.860 [2024-10-09 07:59:48.691774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:46.860 [2024-10-09 07:59:48.691798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.183 ms 00:19:46.860 [2024-10-09 07:59:48.691809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.860 [2024-10-09 07:59:48.693037] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:46.860 [2024-10-09 07:59:48.709534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.860 [2024-10-09 07:59:48.709593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:46.860 [2024-10-09 07:59:48.709612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.497 ms 00:19:46.861 [2024-10-09 07:59:48.709624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.861 [2024-10-09 07:59:48.709759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.861 [2024-10-09 07:59:48.709782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:46.861 [2024-10-09 07:59:48.709800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:19:46.861 [2024-10-09 
07:59:48.709813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.861 [2024-10-09 07:59:48.714253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.861 [2024-10-09 07:59:48.714302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:46.861 [2024-10-09 07:59:48.714318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.376 ms 00:19:46.861 [2024-10-09 07:59:48.714342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.861 [2024-10-09 07:59:48.714504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.861 [2024-10-09 07:59:48.714533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:46.861 [2024-10-09 07:59:48.714547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:19:46.861 [2024-10-09 07:59:48.714559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.861 [2024-10-09 07:59:48.714600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.861 [2024-10-09 07:59:48.714616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:46.861 [2024-10-09 07:59:48.714629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:19:46.861 [2024-10-09 07:59:48.714640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.861 [2024-10-09 07:59:48.714673] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:46.861 [2024-10-09 07:59:48.718968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.861 [2024-10-09 07:59:48.719005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:46.861 [2024-10-09 07:59:48.719021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.304 ms 00:19:46.861 [2024-10-09 07:59:48.719034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.861 [2024-10-09 07:59:48.719110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.861 [2024-10-09 07:59:48.719135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:46.861 [2024-10-09 07:59:48.719148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:19:46.861 [2024-10-09 07:59:48.719160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.861 [2024-10-09 07:59:48.719193] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:46.861 [2024-10-09 07:59:48.719221] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:46.861 [2024-10-09 07:59:48.719266] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:46.861 [2024-10-09 07:59:48.719287] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:19:46.861 [2024-10-09 07:59:48.719433] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:46.861 [2024-10-09 07:59:48.719457] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:46.861 [2024-10-09 07:59:48.719472] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 
00:19:46.861 [2024-10-09 07:59:48.719488] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:46.861 [2024-10-09 07:59:48.719501] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:46.861 [2024-10-09 07:59:48.719514] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:46.861 [2024-10-09 07:59:48.719525] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:46.861 [2024-10-09 07:59:48.719536] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:46.861 [2024-10-09 07:59:48.719547] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:46.861 [2024-10-09 07:59:48.719559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.861 [2024-10-09 07:59:48.719571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:46.861 [2024-10-09 07:59:48.719589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.370 ms 00:19:46.861 [2024-10-09 07:59:48.719615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.861 [2024-10-09 07:59:48.719738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.861 [2024-10-09 07:59:48.719762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:46.861 [2024-10-09 07:59:48.719776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:19:46.861 [2024-10-09 07:59:48.719788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.861 [2024-10-09 07:59:48.719934] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:46.861 [2024-10-09 07:59:48.719953] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:46.861 [2024-10-09 07:59:48.719966] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:46.861 [2024-10-09 07:59:48.719986] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:46.861 [2024-10-09 07:59:48.719999] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:46.861 [2024-10-09 07:59:48.720011] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:46.861 [2024-10-09 07:59:48.720023] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:46.861 [2024-10-09 07:59:48.720036] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:46.861 [2024-10-09 07:59:48.720047] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:46.861 [2024-10-09 07:59:48.720058] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:46.861 [2024-10-09 07:59:48.720069] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:46.861 [2024-10-09 07:59:48.720093] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:46.861 [2024-10-09 07:59:48.720104] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:46.861 [2024-10-09 07:59:48.720115] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:46.861 [2024-10-09 07:59:48.720131] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:19:46.861 [2024-10-09 07:59:48.720142] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:46.861 [2024-10-09 07:59:48.720153] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
nvc_md_mirror 00:19:46.861 [2024-10-09 07:59:48.720164] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:19:46.861 [2024-10-09 07:59:48.720175] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:46.861 [2024-10-09 07:59:48.720185] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:46.861 [2024-10-09 07:59:48.720196] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:46.861 [2024-10-09 07:59:48.720206] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:46.861 [2024-10-09 07:59:48.720216] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:46.861 [2024-10-09 07:59:48.720227] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:46.861 [2024-10-09 07:59:48.720237] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:46.861 [2024-10-09 07:59:48.720247] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:46.861 [2024-10-09 07:59:48.720258] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:46.861 [2024-10-09 07:59:48.720268] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:46.861 [2024-10-09 07:59:48.720278] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:46.861 [2024-10-09 07:59:48.720289] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:19:46.861 [2024-10-09 07:59:48.720300] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:46.861 [2024-10-09 07:59:48.720310] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:46.861 [2024-10-09 07:59:48.720321] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:19:46.861 [2024-10-09 07:59:48.720348] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:46.861 [2024-10-09 07:59:48.720362] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:46.861 [2024-10-09 07:59:48.720373] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:19:46.861 [2024-10-09 07:59:48.720384] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:46.861 [2024-10-09 07:59:48.720395] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:46.861 [2024-10-09 07:59:48.720406] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:19:46.861 [2024-10-09 07:59:48.720417] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:46.861 [2024-10-09 07:59:48.720428] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:46.861 [2024-10-09 07:59:48.720439] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:19:46.861 [2024-10-09 07:59:48.720449] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:46.861 [2024-10-09 07:59:48.720460] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:46.861 [2024-10-09 07:59:48.720471] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:46.861 [2024-10-09 07:59:48.720482] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:46.861 [2024-10-09 07:59:48.720493] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:46.861 [2024-10-09 07:59:48.720505] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:46.861 [2024-10-09 07:59:48.720516] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:46.861 [2024-10-09 07:59:48.720526] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:46.861 [2024-10-09 07:59:48.720537] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:46.861 [2024-10-09 07:59:48.720548] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:46.861 [2024-10-09 07:59:48.720559] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:46.861 [2024-10-09 07:59:48.720571] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:46.861 [2024-10-09 07:59:48.720585] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:46.861 [2024-10-09 07:59:48.720604] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:46.861 [2024-10-09 07:59:48.720616] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:19:46.861 [2024-10-09 07:59:48.720627] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:19:46.861 [2024-10-09 07:59:48.720639] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:19:46.861 [2024-10-09 07:59:48.720650] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:19:46.862 [2024-10-09 07:59:48.720662] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:19:46.862 [2024-10-09 07:59:48.720673] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:19:46.862 [2024-10-09 07:59:48.720684] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:19:46.862 [2024-10-09 07:59:48.720696] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:19:46.862 [2024-10-09 07:59:48.720707] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:19:46.862 [2024-10-09 07:59:48.720718] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:19:46.862 [2024-10-09 07:59:48.720730] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:19:46.862 [2024-10-09 07:59:48.720741] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:19:46.862 [2024-10-09 07:59:48.720753] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:19:46.862 [2024-10-09 07:59:48.720764] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:46.862 [2024-10-09 07:59:48.720777] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region 
type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:46.862 [2024-10-09 07:59:48.720794] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:46.862 [2024-10-09 07:59:48.720807] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:46.862 [2024-10-09 07:59:48.720819] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:46.862 [2024-10-09 07:59:48.720830] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:46.862 [2024-10-09 07:59:48.720843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.862 [2024-10-09 07:59:48.720859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:46.862 [2024-10-09 07:59:48.720871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.980 ms 00:19:46.862 [2024-10-09 07:59:48.720883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.862 [2024-10-09 07:59:48.763408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.862 [2024-10-09 07:59:48.763466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:46.862 [2024-10-09 07:59:48.763486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.449 ms 00:19:46.862 [2024-10-09 07:59:48.763498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.862 [2024-10-09 07:59:48.763713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.862 [2024-10-09 07:59:48.763735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:46.862 [2024-10-09 07:59:48.763749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:19:46.862 [2024-10-09 07:59:48.763760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.862 [2024-10-09 07:59:48.804168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.862 [2024-10-09 07:59:48.804220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:46.862 [2024-10-09 07:59:48.804240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.372 ms 00:19:46.862 [2024-10-09 07:59:48.804252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.862 [2024-10-09 07:59:48.804437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.862 [2024-10-09 07:59:48.804461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:46.862 [2024-10-09 07:59:48.804475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:46.862 [2024-10-09 07:59:48.804487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.862 [2024-10-09 07:59:48.804824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.862 [2024-10-09 07:59:48.804851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:46.862 [2024-10-09 07:59:48.804866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.298 ms 00:19:46.862 [2024-10-09 07:59:48.804878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.862 [2024-10-09 07:59:48.805038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:19:46.862 [2024-10-09 07:59:48.805059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:46.862 [2024-10-09 07:59:48.805072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.128 ms 00:19:46.862 [2024-10-09 07:59:48.805083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.862 [2024-10-09 07:59:48.821608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.862 [2024-10-09 07:59:48.821658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:46.862 [2024-10-09 07:59:48.821675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.493 ms 00:19:46.862 [2024-10-09 07:59:48.821688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.862 [2024-10-09 07:59:48.838195] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:19:46.862 [2024-10-09 07:59:48.838259] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:46.862 [2024-10-09 07:59:48.838280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.862 [2024-10-09 07:59:48.838293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:46.862 [2024-10-09 07:59:48.838307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.414 ms 00:19:46.862 [2024-10-09 07:59:48.838319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.862 [2024-10-09 07:59:48.868764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.862 [2024-10-09 07:59:48.868810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:46.862 [2024-10-09 07:59:48.868836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.312 ms 00:19:46.862 [2024-10-09 07:59:48.868849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.121 [2024-10-09 07:59:48.885021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.121 [2024-10-09 07:59:48.885078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:47.121 [2024-10-09 07:59:48.885097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.054 ms 00:19:47.121 [2024-10-09 07:59:48.885110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.121 [2024-10-09 07:59:48.901082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.121 [2024-10-09 07:59:48.901131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:47.121 [2024-10-09 07:59:48.901149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.840 ms 00:19:47.121 [2024-10-09 07:59:48.901161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.121 [2024-10-09 07:59:48.902035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.121 [2024-10-09 07:59:48.902074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:47.121 [2024-10-09 07:59:48.902090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.725 ms 00:19:47.121 [2024-10-09 07:59:48.902102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.121 [2024-10-09 07:59:48.976781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.121 [2024-10-09 
07:59:48.976868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:47.121 [2024-10-09 07:59:48.976889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 74.636 ms 00:19:47.121 [2024-10-09 07:59:48.976903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.121 [2024-10-09 07:59:48.990410] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:47.121 [2024-10-09 07:59:49.005009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.121 [2024-10-09 07:59:49.005068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:47.121 [2024-10-09 07:59:49.005088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.900 ms 00:19:47.121 [2024-10-09 07:59:49.005101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.121 [2024-10-09 07:59:49.005244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.121 [2024-10-09 07:59:49.005265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:47.121 [2024-10-09 07:59:49.005281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:19:47.121 [2024-10-09 07:59:49.005292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.121 [2024-10-09 07:59:49.005404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.121 [2024-10-09 07:59:49.005437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:47.121 [2024-10-09 07:59:49.005451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 00:19:47.121 [2024-10-09 07:59:49.005462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.121 [2024-10-09 07:59:49.005502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.121 [2024-10-09 07:59:49.005518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:47.121 [2024-10-09 07:59:49.005530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:19:47.121 [2024-10-09 07:59:49.005542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.121 [2024-10-09 07:59:49.005584] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:47.121 [2024-10-09 07:59:49.005602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.121 [2024-10-09 07:59:49.005614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:47.121 [2024-10-09 07:59:49.005631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:19:47.121 [2024-10-09 07:59:49.005643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.121 [2024-10-09 07:59:49.037056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.121 [2024-10-09 07:59:49.037127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:47.121 [2024-10-09 07:59:49.037147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.375 ms 00:19:47.121 [2024-10-09 07:59:49.037159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.121 [2024-10-09 07:59:49.037397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.121 [2024-10-09 07:59:49.037425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:47.121 [2024-10-09 
07:59:49.037439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.088 ms 00:19:47.121 [2024-10-09 07:59:49.037451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.121 [2024-10-09 07:59:49.038731] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:47.121 [2024-10-09 07:59:49.043351] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 351.323 ms, result 0 00:19:47.121 [2024-10-09 07:59:49.044319] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:47.121 [2024-10-09 07:59:49.061227] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:48.495  [2024-10-09T07:59:51.476Z] Copying: 26/256 [MB] (26 MBps) [2024-10-09T07:59:52.411Z] Copying: 51/256 [MB] (24 MBps) [2024-10-09T07:59:53.346Z] Copying: 75/256 [MB] (24 MBps) [2024-10-09T07:59:54.281Z] Copying: 100/256 [MB] (24 MBps) [2024-10-09T07:59:55.224Z] Copying: 125/256 [MB] (25 MBps) [2024-10-09T07:59:56.158Z] Copying: 150/256 [MB] (25 MBps) [2024-10-09T07:59:57.533Z] Copying: 173/256 [MB] (23 MBps) [2024-10-09T07:59:58.470Z] Copying: 199/256 [MB] (25 MBps) [2024-10-09T07:59:59.417Z] Copying: 222/256 [MB] (23 MBps) [2024-10-09T07:59:59.675Z] Copying: 248/256 [MB] (25 MBps) [2024-10-09T07:59:59.934Z] Copying: 256/256 [MB] (average 24 MBps)[2024-10-09 07:59:59.708096] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:57.922 [2024-10-09 07:59:59.724416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.922 [2024-10-09 07:59:59.724476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:57.922 [2024-10-09 07:59:59.724513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:57.922 [2024-10-09 07:59:59.724548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.922 [2024-10-09 07:59:59.724606] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:19:57.922 [2024-10-09 07:59:59.728739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.922 [2024-10-09 07:59:59.728789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:57.922 [2024-10-09 07:59:59.728818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.070 ms 00:19:57.922 [2024-10-09 07:59:59.728844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.922 [2024-10-09 07:59:59.729347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.922 [2024-10-09 07:59:59.729394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:57.922 [2024-10-09 07:59:59.729435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.406 ms 00:19:57.922 [2024-10-09 07:59:59.729460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.922 [2024-10-09 07:59:59.734145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.922 [2024-10-09 07:59:59.734191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:57.922 [2024-10-09 07:59:59.734220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.641 ms 00:19:57.922 [2024-10-09 07:59:59.734245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:19:57.922 [2024-10-09 07:59:59.743640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.922 [2024-10-09 07:59:59.743688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:57.922 [2024-10-09 07:59:59.743729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.347 ms 00:19:57.922 [2024-10-09 07:59:59.743754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.922 [2024-10-09 07:59:59.782563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.922 [2024-10-09 07:59:59.782639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:57.922 [2024-10-09 07:59:59.782687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.669 ms 00:19:57.922 [2024-10-09 07:59:59.782712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.922 [2024-10-09 07:59:59.804232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.922 [2024-10-09 07:59:59.804287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:57.922 [2024-10-09 07:59:59.804321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.320 ms 00:19:57.922 [2024-10-09 07:59:59.804362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.922 [2024-10-09 07:59:59.804660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.922 [2024-10-09 07:59:59.804713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:57.922 [2024-10-09 07:59:59.804741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.158 ms 00:19:57.922 [2024-10-09 07:59:59.804767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.922 [2024-10-09 07:59:59.843222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.922 [2024-10-09 07:59:59.843284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:57.922 [2024-10-09 07:59:59.843316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.397 ms 00:19:57.922 [2024-10-09 07:59:59.843356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.922 [2024-10-09 07:59:59.880503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.922 [2024-10-09 07:59:59.880556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:57.922 [2024-10-09 07:59:59.880597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.041 ms 00:19:57.922 [2024-10-09 07:59:59.880617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.922 [2024-10-09 07:59:59.911585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.922 [2024-10-09 07:59:59.911671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:57.922 [2024-10-09 07:59:59.911702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.840 ms 00:19:57.922 [2024-10-09 07:59:59.911723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.185 [2024-10-09 07:59:59.943189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.185 [2024-10-09 07:59:59.943242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:58.185 [2024-10-09 07:59:59.943269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.203 ms 00:19:58.185 
[2024-10-09 07:59:59.943290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.185 [2024-10-09 07:59:59.943455] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:58.185 [2024-10-09 07:59:59.943496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:58.185 [2024-10-09 07:59:59.943523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:58.185 [2024-10-09 07:59:59.943546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:58.185 [2024-10-09 07:59:59.943569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:58.185 [2024-10-09 07:59:59.943591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:58.185 [2024-10-09 07:59:59.943631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:58.185 [2024-10-09 07:59:59.943661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:58.185 [2024-10-09 07:59:59.943682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:58.185 [2024-10-09 07:59:59.943704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:58.185 [2024-10-09 07:59:59.943726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:58.185 [2024-10-09 07:59:59.943748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:58.185 [2024-10-09 07:59:59.943768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:58.185 [2024-10-09 07:59:59.943790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:58.185 [2024-10-09 07:59:59.943812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:58.185 [2024-10-09 07:59:59.943840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:58.185 [2024-10-09 07:59:59.943863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:58.185 [2024-10-09 07:59:59.943884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:58.185 [2024-10-09 07:59:59.943905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:58.185 [2024-10-09 07:59:59.943925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:58.185 [2024-10-09 07:59:59.943947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:58.185 [2024-10-09 07:59:59.943968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:58.185 [2024-10-09 07:59:59.943989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:58.185 [2024-10-09 07:59:59.944011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:58.185 [2024-10-09 07:59:59.944032] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:58.185 [2024-10-09 07:59:59.944053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:58.185 [2024-10-09 07:59:59.944074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:58.185 [2024-10-09 07:59:59.944095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:58.185 [2024-10-09 07:59:59.944115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:58.185 [2024-10-09 07:59:59.944147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:58.185 [2024-10-09 07:59:59.944176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:58.185 [2024-10-09 07:59:59.944198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:58.185 [2024-10-09 07:59:59.944221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:58.185 [2024-10-09 07:59:59.944243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:58.185 [2024-10-09 07:59:59.944264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:58.185 [2024-10-09 07:59:59.944285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:58.185 [2024-10-09 07:59:59.944308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:58.185 [2024-10-09 07:59:59.944343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:58.185 [2024-10-09 07:59:59.944369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:58.185 [2024-10-09 07:59:59.944391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:58.185 [2024-10-09 07:59:59.944412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:58.185 [2024-10-09 07:59:59.944433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:58.185 [2024-10-09 07:59:59.944455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:58.185 [2024-10-09 07:59:59.944477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:58.185 [2024-10-09 07:59:59.944500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:58.185 [2024-10-09 07:59:59.944522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:58.185 [2024-10-09 07:59:59.944544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:58.185 [2024-10-09 07:59:59.944571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:58.185 [2024-10-09 07:59:59.944600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:58.185 [2024-10-09 
07:59:59.944626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:58.185 [2024-10-09 07:59:59.944653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:58.185 [2024-10-09 07:59:59.944675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:58.185 [2024-10-09 07:59:59.944696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:58.185 [2024-10-09 07:59:59.944717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:58.185 [2024-10-09 07:59:59.944740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:58.185 [2024-10-09 07:59:59.944761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:58.185 [2024-10-09 07:59:59.944782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:58.185 [2024-10-09 07:59:59.944809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:58.185 [2024-10-09 07:59:59.944831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:58.185 [2024-10-09 07:59:59.944857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:58.185 [2024-10-09 07:59:59.944880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:58.185 [2024-10-09 07:59:59.944901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:58.185 [2024-10-09 07:59:59.944921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:58.185 [2024-10-09 07:59:59.944944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:58.185 [2024-10-09 07:59:59.944964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:58.185 [2024-10-09 07:59:59.944985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:58.185 [2024-10-09 07:59:59.945006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:58.185 [2024-10-09 07:59:59.945033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:58.185 [2024-10-09 07:59:59.945054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:58.186 [2024-10-09 07:59:59.945081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:58.186 [2024-10-09 07:59:59.945107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:58.186 [2024-10-09 07:59:59.945134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:58.186 [2024-10-09 07:59:59.945157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:58.186 [2024-10-09 07:59:59.945177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 
00:19:58.186 [2024-10-09 07:59:59.945198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:58.186 [2024-10-09 07:59:59.945219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:58.186 [2024-10-09 07:59:59.945242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:58.186 [2024-10-09 07:59:59.945264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:58.186 [2024-10-09 07:59:59.945283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:58.186 [2024-10-09 07:59:59.945304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:58.186 [2024-10-09 07:59:59.945326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:58.186 [2024-10-09 07:59:59.945368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:58.186 [2024-10-09 07:59:59.945391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:58.186 [2024-10-09 07:59:59.945413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:58.186 [2024-10-09 07:59:59.945434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:58.186 [2024-10-09 07:59:59.945455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:58.186 [2024-10-09 07:59:59.945475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:58.186 [2024-10-09 07:59:59.945496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:58.186 [2024-10-09 07:59:59.945517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:58.186 [2024-10-09 07:59:59.945538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:58.186 [2024-10-09 07:59:59.945559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:58.186 [2024-10-09 07:59:59.945587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:58.186 [2024-10-09 07:59:59.945612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:58.186 [2024-10-09 07:59:59.945641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:58.186 [2024-10-09 07:59:59.945663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:58.186 [2024-10-09 07:59:59.945684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:58.186 [2024-10-09 07:59:59.945705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:58.186 [2024-10-09 07:59:59.945725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:58.186 [2024-10-09 07:59:59.945746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 
wr_cnt: 0 state: free 00:19:58.186 [2024-10-09 07:59:59.945767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:58.186 [2024-10-09 07:59:59.945809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:58.186 [2024-10-09 07:59:59.945841] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:58.186 [2024-10-09 07:59:59.945880] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 2a716a08-2588-4711-9bfe-c66b02b59b71 00:19:58.186 [2024-10-09 07:59:59.945903] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:58.186 [2024-10-09 07:59:59.945922] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:58.186 [2024-10-09 07:59:59.945948] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:58.186 [2024-10-09 07:59:59.945977] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:58.186 [2024-10-09 07:59:59.945997] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:58.186 [2024-10-09 07:59:59.946017] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:58.186 [2024-10-09 07:59:59.946036] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:58.186 [2024-10-09 07:59:59.946056] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:58.186 [2024-10-09 07:59:59.946073] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:58.186 [2024-10-09 07:59:59.946093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.186 [2024-10-09 07:59:59.946120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:58.186 [2024-10-09 07:59:59.946146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.641 ms 00:19:58.186 [2024-10-09 07:59:59.946166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.186 [2024-10-09 07:59:59.964253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.186 [2024-10-09 07:59:59.964385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:58.186 [2024-10-09 07:59:59.964417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.038 ms 00:19:58.186 [2024-10-09 07:59:59.964435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.186 [2024-10-09 07:59:59.965070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.186 [2024-10-09 07:59:59.965111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:58.186 [2024-10-09 07:59:59.965139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.518 ms 00:19:58.186 [2024-10-09 07:59:59.965160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.186 [2024-10-09 08:00:00.005905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:58.186 [2024-10-09 08:00:00.005973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:58.186 [2024-10-09 08:00:00.006001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:58.186 [2024-10-09 08:00:00.006021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.186 [2024-10-09 08:00:00.006190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:58.186 [2024-10-09 08:00:00.006219] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:58.186 [2024-10-09 08:00:00.006241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:58.186 [2024-10-09 08:00:00.006261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.186 [2024-10-09 08:00:00.006387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:58.186 [2024-10-09 08:00:00.006438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:58.186 [2024-10-09 08:00:00.006462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:58.186 [2024-10-09 08:00:00.006483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.186 [2024-10-09 08:00:00.006525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:58.186 [2024-10-09 08:00:00.006561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:58.186 [2024-10-09 08:00:00.006583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:58.186 [2024-10-09 08:00:00.006604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.186 [2024-10-09 08:00:00.111386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:58.186 [2024-10-09 08:00:00.111457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:58.186 [2024-10-09 08:00:00.111484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:58.186 [2024-10-09 08:00:00.111503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.445 [2024-10-09 08:00:00.196989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:58.445 [2024-10-09 08:00:00.197063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:58.445 [2024-10-09 08:00:00.197091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:58.445 [2024-10-09 08:00:00.197111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.445 [2024-10-09 08:00:00.197228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:58.445 [2024-10-09 08:00:00.197255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:58.445 [2024-10-09 08:00:00.197274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:58.445 [2024-10-09 08:00:00.197304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.445 [2024-10-09 08:00:00.197382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:58.445 [2024-10-09 08:00:00.197422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:58.445 [2024-10-09 08:00:00.197445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:58.445 [2024-10-09 08:00:00.197465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.445 [2024-10-09 08:00:00.197645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:58.445 [2024-10-09 08:00:00.197686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:58.445 [2024-10-09 08:00:00.197712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:58.445 [2024-10-09 08:00:00.197741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.445 [2024-10-09 08:00:00.197830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Rollback 00:19:58.445 [2024-10-09 08:00:00.197858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:58.445 [2024-10-09 08:00:00.197880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:58.445 [2024-10-09 08:00:00.197907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.445 [2024-10-09 08:00:00.197982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:58.445 [2024-10-09 08:00:00.198019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:58.445 [2024-10-09 08:00:00.198043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:58.445 [2024-10-09 08:00:00.198061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.445 [2024-10-09 08:00:00.198153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:58.445 [2024-10-09 08:00:00.198189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:58.445 [2024-10-09 08:00:00.198211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:58.445 [2024-10-09 08:00:00.198236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.445 [2024-10-09 08:00:00.198511] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 474.075 ms, result 0 00:19:59.380 00:19:59.380 00:19:59.380 08:00:01 ftl.ftl_trim -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:19:59.945 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK 00:19:59.945 08:00:01 ftl.ftl_trim -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:19:59.945 08:00:01 ftl.ftl_trim -- ftl/trim.sh@109 -- # fio_kill 00:19:59.945 08:00:01 ftl.ftl_trim -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:19:59.945 08:00:01 ftl.ftl_trim -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:59.945 08:00:01 ftl.ftl_trim -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern 00:20:00.203 08:00:01 ftl.ftl_trim -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data 00:20:00.203 Process with pid 76602 is not found 00:20:00.203 08:00:02 ftl.ftl_trim -- ftl/trim.sh@20 -- # killprocess 76602 00:20:00.203 08:00:02 ftl.ftl_trim -- common/autotest_common.sh@950 -- # '[' -z 76602 ']' 00:20:00.203 08:00:02 ftl.ftl_trim -- common/autotest_common.sh@954 -- # kill -0 76602 00:20:00.203 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (76602) - No such process 00:20:00.203 08:00:02 ftl.ftl_trim -- common/autotest_common.sh@977 -- # echo 'Process with pid 76602 is not found' 00:20:00.203 00:20:00.203 real 1m11.154s 00:20:00.203 user 1m38.939s 00:20:00.203 sys 0m7.422s 00:20:00.203 ************************************ 00:20:00.203 END TEST ftl_trim 00:20:00.203 ************************************ 00:20:00.203 08:00:02 ftl.ftl_trim -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:00.203 08:00:02 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:20:00.203 08:00:02 ftl -- ftl/ftl.sh@76 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:20:00.203 08:00:02 ftl -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:20:00.203 08:00:02 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:00.203 08:00:02 ftl -- common/autotest_common.sh@10 
-- # set +x 00:20:00.203 ************************************ 00:20:00.203 START TEST ftl_restore 00:20:00.203 ************************************ 00:20:00.203 08:00:02 ftl.ftl_restore -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:20:00.203 * Looking for test storage... 00:20:00.203 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:20:00.203 08:00:02 ftl.ftl_restore -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:20:00.203 08:00:02 ftl.ftl_restore -- common/autotest_common.sh@1681 -- # lcov --version 00:20:00.203 08:00:02 ftl.ftl_restore -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:20:00.203 08:00:02 ftl.ftl_restore -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:20:00.203 08:00:02 ftl.ftl_restore -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:00.203 08:00:02 ftl.ftl_restore -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:00.203 08:00:02 ftl.ftl_restore -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:00.203 08:00:02 ftl.ftl_restore -- scripts/common.sh@336 -- # IFS=.-: 00:20:00.203 08:00:02 ftl.ftl_restore -- scripts/common.sh@336 -- # read -ra ver1 00:20:00.203 08:00:02 ftl.ftl_restore -- scripts/common.sh@337 -- # IFS=.-: 00:20:00.203 08:00:02 ftl.ftl_restore -- scripts/common.sh@337 -- # read -ra ver2 00:20:00.203 08:00:02 ftl.ftl_restore -- scripts/common.sh@338 -- # local 'op=<' 00:20:00.203 08:00:02 ftl.ftl_restore -- scripts/common.sh@340 -- # ver1_l=2 00:20:00.203 08:00:02 ftl.ftl_restore -- scripts/common.sh@341 -- # ver2_l=1 00:20:00.203 08:00:02 ftl.ftl_restore -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:00.203 08:00:02 ftl.ftl_restore -- scripts/common.sh@344 -- # case "$op" in 00:20:00.203 08:00:02 ftl.ftl_restore -- scripts/common.sh@345 -- # : 1 00:20:00.203 08:00:02 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:00.203 08:00:02 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:00.462 08:00:02 ftl.ftl_restore -- scripts/common.sh@365 -- # decimal 1 00:20:00.462 08:00:02 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=1 00:20:00.462 08:00:02 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:00.462 08:00:02 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 1 00:20:00.462 08:00:02 ftl.ftl_restore -- scripts/common.sh@365 -- # ver1[v]=1 00:20:00.462 08:00:02 ftl.ftl_restore -- scripts/common.sh@366 -- # decimal 2 00:20:00.462 08:00:02 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=2 00:20:00.462 08:00:02 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:00.462 08:00:02 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 2 00:20:00.462 08:00:02 ftl.ftl_restore -- scripts/common.sh@366 -- # ver2[v]=2 00:20:00.462 08:00:02 ftl.ftl_restore -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:00.462 08:00:02 ftl.ftl_restore -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:00.462 08:00:02 ftl.ftl_restore -- scripts/common.sh@368 -- # return 0 00:20:00.462 08:00:02 ftl.ftl_restore -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:00.462 08:00:02 ftl.ftl_restore -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:20:00.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:00.462 --rc genhtml_branch_coverage=1 00:20:00.462 --rc genhtml_function_coverage=1 00:20:00.462 --rc genhtml_legend=1 00:20:00.462 --rc geninfo_all_blocks=1 00:20:00.462 --rc geninfo_unexecuted_blocks=1 00:20:00.462 00:20:00.462 ' 00:20:00.462 08:00:02 ftl.ftl_restore -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:20:00.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:00.462 --rc genhtml_branch_coverage=1 00:20:00.462 --rc genhtml_function_coverage=1 00:20:00.462 --rc genhtml_legend=1 00:20:00.462 --rc geninfo_all_blocks=1 00:20:00.462 --rc geninfo_unexecuted_blocks=1 00:20:00.462 00:20:00.462 ' 00:20:00.462 08:00:02 ftl.ftl_restore -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:20:00.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:00.462 --rc genhtml_branch_coverage=1 00:20:00.462 --rc genhtml_function_coverage=1 00:20:00.462 --rc genhtml_legend=1 00:20:00.462 --rc geninfo_all_blocks=1 00:20:00.462 --rc geninfo_unexecuted_blocks=1 00:20:00.462 00:20:00.462 ' 00:20:00.462 08:00:02 ftl.ftl_restore -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:20:00.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:00.462 --rc genhtml_branch_coverage=1 00:20:00.462 --rc genhtml_function_coverage=1 00:20:00.462 --rc genhtml_legend=1 00:20:00.462 --rc geninfo_all_blocks=1 00:20:00.462 --rc geninfo_unexecuted_blocks=1 00:20:00.462 00:20:00.462 ' 00:20:00.462 08:00:02 ftl.ftl_restore -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:20:00.462 08:00:02 ftl.ftl_restore -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:20:00.462 08:00:02 ftl.ftl_restore -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:20:00.462 08:00:02 ftl.ftl_restore -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:20:00.462 08:00:02 ftl.ftl_restore -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
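The xtrace above is scripts/common.sh deciding how to drive lcov: it splits '1.15' and '2' on the characters '.', '-' and ':', compares them component by component, and, since 1.15 < 2, exports the legacy --rc lcov_*_coverage=1 options. A condensed, self-contained sketch of that comparison (function name assumed, not the verbatim scripts/common.sh helper):

    # version_lt 1.15 2  ->  returns 0 (true); missing components compare as 0
    version_lt() {
        local IFS=.-: v a b
        read -ra a <<< "$1"
        read -ra b <<< "$2"
        for ((v = 0; v < (${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]}); v++)); do
            (( ${a[v]:-0} < ${b[v]:-0} )) && return 0   # first differing part decides
            (( ${a[v]:-0} > ${b[v]:-0} )) && return 1
        done
        return 1   # equal versions are not "less than"
    }
    version_lt 1.15 2 && echo 'old lcov: enable the --rc lcov_*_coverage=1 options'

The traced decimal helper additionally guards each component with a [[ ... =~ ^[0-9]+$ ]] check before the arithmetic; the sketch assumes numeric parts.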
00:20:00.462 08:00:02 ftl.ftl_restore -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:20:00.462 08:00:02 ftl.ftl_restore -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:00.462 08:00:02 ftl.ftl_restore -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:20:00.462 08:00:02 ftl.ftl_restore -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:20:00.462 08:00:02 ftl.ftl_restore -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:00.462 08:00:02 ftl.ftl_restore -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:00.462 08:00:02 ftl.ftl_restore -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:20:00.462 08:00:02 ftl.ftl_restore -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:20:00.462 08:00:02 ftl.ftl_restore -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:20:00.462 08:00:02 ftl.ftl_restore -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:20:00.462 08:00:02 ftl.ftl_restore -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:20:00.462 08:00:02 ftl.ftl_restore -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:20:00.462 08:00:02 ftl.ftl_restore -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:00.462 08:00:02 ftl.ftl_restore -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:00.462 08:00:02 ftl.ftl_restore -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:20:00.462 08:00:02 ftl.ftl_restore -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:20:00.462 08:00:02 ftl.ftl_restore -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:20:00.462 08:00:02 ftl.ftl_restore -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:20:00.462 08:00:02 ftl.ftl_restore -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:20:00.462 08:00:02 ftl.ftl_restore -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:20:00.462 08:00:02 ftl.ftl_restore -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:20:00.462 08:00:02 ftl.ftl_restore -- ftl/common.sh@23 -- # spdk_ini_pid= 00:20:00.462 08:00:02 ftl.ftl_restore -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:00.462 08:00:02 ftl.ftl_restore -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:00.462 08:00:02 ftl.ftl_restore -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:00.462 08:00:02 ftl.ftl_restore -- ftl/restore.sh@13 -- # mktemp -d 00:20:00.462 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
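The 'Waiting for process to start up...' line above is echoed by waitforlisten and lands slightly out of order; the trace just below shows restore.sh parsing its options, launching spdk_tgt in the background as pid 76877, and blocking until the RPC socket answers. A minimal sketch of that launch-and-poll pattern, using the binary and rpc.py paths from the trace (rpc_get_methods is a standard SPDK RPC; the loop body approximates, rather than reproduces, the autotest helper):

    spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$spdk_tgt" &
    svcpid=$!
    # poll the UNIX-domain RPC socket until the target responds, bailing out
    # if the process dies before it ever starts listening
    until "$rpc" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$svcpid" 2>/dev/null || { echo 'spdk_tgt exited early' >&2; exit 1; }
        sleep 0.5
    done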
00:20:00.462 08:00:02 ftl.ftl_restore -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.ccFyQ7VC1Y 00:20:00.462 08:00:02 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:20:00.462 08:00:02 ftl.ftl_restore -- ftl/restore.sh@16 -- # case $opt in 00:20:00.462 08:00:02 ftl.ftl_restore -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:20:00.462 08:00:02 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:20:00.462 08:00:02 ftl.ftl_restore -- ftl/restore.sh@23 -- # shift 2 00:20:00.462 08:00:02 ftl.ftl_restore -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:20:00.462 08:00:02 ftl.ftl_restore -- ftl/restore.sh@25 -- # timeout=240 00:20:00.462 08:00:02 ftl.ftl_restore -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:20:00.462 08:00:02 ftl.ftl_restore -- ftl/restore.sh@39 -- # svcpid=76877 00:20:00.462 08:00:02 ftl.ftl_restore -- ftl/restore.sh@41 -- # waitforlisten 76877 00:20:00.462 08:00:02 ftl.ftl_restore -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:00.462 08:00:02 ftl.ftl_restore -- common/autotest_common.sh@831 -- # '[' -z 76877 ']' 00:20:00.462 08:00:02 ftl.ftl_restore -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:00.462 08:00:02 ftl.ftl_restore -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:00.462 08:00:02 ftl.ftl_restore -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:00.462 08:00:02 ftl.ftl_restore -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:00.462 08:00:02 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:20:00.462 [2024-10-09 08:00:02.388729] Starting SPDK v25.01-pre git sha1 1c2942c86 / DPDK 24.03.0 initialization... 
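Once the reactor reports in (next notices below), the script moves on to device preparation: it attaches the QEMU NVMe controller at 0000:00:11.0, and the large JSON dump that follows is the bdev_get_bdevs output that get_bdev_size reduces to a MiB figure with jq. Condensed, the traced sequence and its arithmetic (4096-byte blocks x 1310720 blocks = 5120 MiB):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$rpc" bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
    bs=$("$rpc" bdev_get_bdevs -b nvme0n1 | jq '.[] .block_size')   # 4096
    nb=$("$rpc" bdev_get_bdevs -b nvme0n1 | jq '.[] .num_blocks')   # 1310720
    echo $(( bs * nb / 1024 / 1024 ))                               # 5120 (MiB)

The 103424 MiB logical volume carved out of this 5120 MiB device further down only works because bdev_lvol_create is passed -t: the lvol is thin-provisioned ("thin_provision": true in its dump), so no space is allocated until it is written.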
00:20:00.462 [2024-10-09 08:00:02.389093] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76877 ] 00:20:00.720 [2024-10-09 08:00:02.556249] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:00.977 [2024-10-09 08:00:02.794750] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:20:01.912 08:00:03 ftl.ftl_restore -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:01.912 08:00:03 ftl.ftl_restore -- common/autotest_common.sh@864 -- # return 0 00:20:01.912 08:00:03 ftl.ftl_restore -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:20:01.912 08:00:03 ftl.ftl_restore -- ftl/common.sh@54 -- # local name=nvme0 00:20:01.912 08:00:03 ftl.ftl_restore -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:20:01.912 08:00:03 ftl.ftl_restore -- ftl/common.sh@56 -- # local size=103424 00:20:01.912 08:00:03 ftl.ftl_restore -- ftl/common.sh@59 -- # local base_bdev 00:20:01.912 08:00:03 ftl.ftl_restore -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:20:02.170 08:00:03 ftl.ftl_restore -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:20:02.170 08:00:03 ftl.ftl_restore -- ftl/common.sh@62 -- # local base_size 00:20:02.170 08:00:03 ftl.ftl_restore -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:20:02.170 08:00:03 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:20:02.170 08:00:03 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:20:02.170 08:00:03 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bs 00:20:02.170 08:00:03 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:20:02.170 08:00:03 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:20:02.428 08:00:04 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:20:02.428 { 00:20:02.428 "name": "nvme0n1", 00:20:02.428 "aliases": [ 00:20:02.428 "45a42693-5fad-4f6e-9d95-4942d84c2951" 00:20:02.428 ], 00:20:02.428 "product_name": "NVMe disk", 00:20:02.428 "block_size": 4096, 00:20:02.428 "num_blocks": 1310720, 00:20:02.428 "uuid": "45a42693-5fad-4f6e-9d95-4942d84c2951", 00:20:02.428 "numa_id": -1, 00:20:02.428 "assigned_rate_limits": { 00:20:02.428 "rw_ios_per_sec": 0, 00:20:02.428 "rw_mbytes_per_sec": 0, 00:20:02.428 "r_mbytes_per_sec": 0, 00:20:02.428 "w_mbytes_per_sec": 0 00:20:02.428 }, 00:20:02.428 "claimed": true, 00:20:02.428 "claim_type": "read_many_write_one", 00:20:02.428 "zoned": false, 00:20:02.428 "supported_io_types": { 00:20:02.428 "read": true, 00:20:02.428 "write": true, 00:20:02.428 "unmap": true, 00:20:02.428 "flush": true, 00:20:02.428 "reset": true, 00:20:02.428 "nvme_admin": true, 00:20:02.428 "nvme_io": true, 00:20:02.428 "nvme_io_md": false, 00:20:02.428 "write_zeroes": true, 00:20:02.428 "zcopy": false, 00:20:02.428 "get_zone_info": false, 00:20:02.428 "zone_management": false, 00:20:02.428 "zone_append": false, 00:20:02.428 "compare": true, 00:20:02.428 "compare_and_write": false, 00:20:02.428 "abort": true, 00:20:02.428 "seek_hole": false, 00:20:02.428 "seek_data": false, 00:20:02.428 "copy": true, 00:20:02.428 "nvme_iov_md": false 00:20:02.428 }, 00:20:02.428 "driver_specific": { 00:20:02.428 "nvme": [ 
00:20:02.428 { 00:20:02.428 "pci_address": "0000:00:11.0", 00:20:02.428 "trid": { 00:20:02.428 "trtype": "PCIe", 00:20:02.428 "traddr": "0000:00:11.0" 00:20:02.428 }, 00:20:02.428 "ctrlr_data": { 00:20:02.428 "cntlid": 0, 00:20:02.428 "vendor_id": "0x1b36", 00:20:02.428 "model_number": "QEMU NVMe Ctrl", 00:20:02.428 "serial_number": "12341", 00:20:02.428 "firmware_revision": "8.0.0", 00:20:02.428 "subnqn": "nqn.2019-08.org.qemu:12341", 00:20:02.428 "oacs": { 00:20:02.428 "security": 0, 00:20:02.428 "format": 1, 00:20:02.428 "firmware": 0, 00:20:02.428 "ns_manage": 1 00:20:02.428 }, 00:20:02.428 "multi_ctrlr": false, 00:20:02.428 "ana_reporting": false 00:20:02.428 }, 00:20:02.428 "vs": { 00:20:02.428 "nvme_version": "1.4" 00:20:02.428 }, 00:20:02.428 "ns_data": { 00:20:02.428 "id": 1, 00:20:02.428 "can_share": false 00:20:02.428 } 00:20:02.428 } 00:20:02.428 ], 00:20:02.428 "mp_policy": "active_passive" 00:20:02.428 } 00:20:02.428 } 00:20:02.428 ]' 00:20:02.428 08:00:04 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:20:02.428 08:00:04 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:20:02.428 08:00:04 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:20:02.687 08:00:04 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=1310720 00:20:02.687 08:00:04 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:20:02.687 08:00:04 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 5120 00:20:02.687 08:00:04 ftl.ftl_restore -- ftl/common.sh@63 -- # base_size=5120 00:20:02.687 08:00:04 ftl.ftl_restore -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:20:02.687 08:00:04 ftl.ftl_restore -- ftl/common.sh@67 -- # clear_lvols 00:20:02.687 08:00:04 ftl.ftl_restore -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:20:02.687 08:00:04 ftl.ftl_restore -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:20:02.945 08:00:04 ftl.ftl_restore -- ftl/common.sh@28 -- # stores=0a866b3d-46a8-4eb4-9ff3-8f9746b1ad63 00:20:02.945 08:00:04 ftl.ftl_restore -- ftl/common.sh@29 -- # for lvs in $stores 00:20:02.945 08:00:04 ftl.ftl_restore -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 0a866b3d-46a8-4eb4-9ff3-8f9746b1ad63 00:20:03.208 08:00:05 ftl.ftl_restore -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:20:03.468 08:00:05 ftl.ftl_restore -- ftl/common.sh@68 -- # lvs=4feb386d-6833-4f0a-b42e-129da1e3bb25 00:20:03.468 08:00:05 ftl.ftl_restore -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 4feb386d-6833-4f0a-b42e-129da1e3bb25 00:20:04.041 08:00:05 ftl.ftl_restore -- ftl/restore.sh@43 -- # split_bdev=e99fdbd0-9aa8-4f4f-9f42-723b1ccd4eb8 00:20:04.041 08:00:05 ftl.ftl_restore -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:20:04.041 08:00:05 ftl.ftl_restore -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 e99fdbd0-9aa8-4f4f-9f42-723b1ccd4eb8 00:20:04.041 08:00:05 ftl.ftl_restore -- ftl/common.sh@35 -- # local name=nvc0 00:20:04.041 08:00:05 ftl.ftl_restore -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:20:04.041 08:00:05 ftl.ftl_restore -- ftl/common.sh@37 -- # local base_bdev=e99fdbd0-9aa8-4f4f-9f42-723b1ccd4eb8 00:20:04.041 08:00:05 ftl.ftl_restore -- ftl/common.sh@38 -- # local cache_size= 00:20:04.041 08:00:05 ftl.ftl_restore -- ftl/common.sh@41 -- # get_bdev_size 
e99fdbd0-9aa8-4f4f-9f42-723b1ccd4eb8 00:20:04.041 08:00:05 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=e99fdbd0-9aa8-4f4f-9f42-723b1ccd4eb8 00:20:04.041 08:00:05 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:20:04.041 08:00:05 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bs 00:20:04.041 08:00:05 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:20:04.041 08:00:05 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b e99fdbd0-9aa8-4f4f-9f42-723b1ccd4eb8 00:20:04.305 08:00:06 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:20:04.305 { 00:20:04.305 "name": "e99fdbd0-9aa8-4f4f-9f42-723b1ccd4eb8", 00:20:04.305 "aliases": [ 00:20:04.305 "lvs/nvme0n1p0" 00:20:04.305 ], 00:20:04.305 "product_name": "Logical Volume", 00:20:04.305 "block_size": 4096, 00:20:04.305 "num_blocks": 26476544, 00:20:04.305 "uuid": "e99fdbd0-9aa8-4f4f-9f42-723b1ccd4eb8", 00:20:04.305 "assigned_rate_limits": { 00:20:04.305 "rw_ios_per_sec": 0, 00:20:04.305 "rw_mbytes_per_sec": 0, 00:20:04.305 "r_mbytes_per_sec": 0, 00:20:04.305 "w_mbytes_per_sec": 0 00:20:04.305 }, 00:20:04.305 "claimed": false, 00:20:04.305 "zoned": false, 00:20:04.305 "supported_io_types": { 00:20:04.305 "read": true, 00:20:04.305 "write": true, 00:20:04.305 "unmap": true, 00:20:04.305 "flush": false, 00:20:04.305 "reset": true, 00:20:04.305 "nvme_admin": false, 00:20:04.305 "nvme_io": false, 00:20:04.305 "nvme_io_md": false, 00:20:04.305 "write_zeroes": true, 00:20:04.305 "zcopy": false, 00:20:04.305 "get_zone_info": false, 00:20:04.305 "zone_management": false, 00:20:04.305 "zone_append": false, 00:20:04.305 "compare": false, 00:20:04.305 "compare_and_write": false, 00:20:04.305 "abort": false, 00:20:04.305 "seek_hole": true, 00:20:04.305 "seek_data": true, 00:20:04.305 "copy": false, 00:20:04.305 "nvme_iov_md": false 00:20:04.305 }, 00:20:04.305 "driver_specific": { 00:20:04.305 "lvol": { 00:20:04.305 "lvol_store_uuid": "4feb386d-6833-4f0a-b42e-129da1e3bb25", 00:20:04.305 "base_bdev": "nvme0n1", 00:20:04.305 "thin_provision": true, 00:20:04.305 "num_allocated_clusters": 0, 00:20:04.305 "snapshot": false, 00:20:04.305 "clone": false, 00:20:04.305 "esnap_clone": false 00:20:04.305 } 00:20:04.305 } 00:20:04.305 } 00:20:04.305 ]' 00:20:04.305 08:00:06 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:20:04.305 08:00:06 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:20:04.305 08:00:06 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:20:04.305 08:00:06 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=26476544 00:20:04.305 08:00:06 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:20:04.305 08:00:06 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 103424 00:20:04.305 08:00:06 ftl.ftl_restore -- ftl/common.sh@41 -- # local base_size=5171 00:20:04.305 08:00:06 ftl.ftl_restore -- ftl/common.sh@44 -- # local nvc_bdev 00:20:04.305 08:00:06 ftl.ftl_restore -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:20:04.895 08:00:06 ftl.ftl_restore -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:20:04.895 08:00:06 ftl.ftl_restore -- ftl/common.sh@47 -- # [[ -z '' ]] 00:20:04.895 08:00:06 ftl.ftl_restore -- ftl/common.sh@48 -- # get_bdev_size e99fdbd0-9aa8-4f4f-9f42-723b1ccd4eb8 00:20:04.895 08:00:06 
ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=e99fdbd0-9aa8-4f4f-9f42-723b1ccd4eb8 00:20:04.895 08:00:06 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:20:04.895 08:00:06 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bs 00:20:04.895 08:00:06 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:20:04.895 08:00:06 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b e99fdbd0-9aa8-4f4f-9f42-723b1ccd4eb8 00:20:05.154 08:00:06 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:20:05.154 { 00:20:05.154 "name": "e99fdbd0-9aa8-4f4f-9f42-723b1ccd4eb8", 00:20:05.154 "aliases": [ 00:20:05.154 "lvs/nvme0n1p0" 00:20:05.154 ], 00:20:05.154 "product_name": "Logical Volume", 00:20:05.154 "block_size": 4096, 00:20:05.154 "num_blocks": 26476544, 00:20:05.154 "uuid": "e99fdbd0-9aa8-4f4f-9f42-723b1ccd4eb8", 00:20:05.154 "assigned_rate_limits": { 00:20:05.154 "rw_ios_per_sec": 0, 00:20:05.154 "rw_mbytes_per_sec": 0, 00:20:05.154 "r_mbytes_per_sec": 0, 00:20:05.154 "w_mbytes_per_sec": 0 00:20:05.154 }, 00:20:05.154 "claimed": false, 00:20:05.154 "zoned": false, 00:20:05.154 "supported_io_types": { 00:20:05.154 "read": true, 00:20:05.154 "write": true, 00:20:05.154 "unmap": true, 00:20:05.154 "flush": false, 00:20:05.154 "reset": true, 00:20:05.154 "nvme_admin": false, 00:20:05.154 "nvme_io": false, 00:20:05.154 "nvme_io_md": false, 00:20:05.154 "write_zeroes": true, 00:20:05.154 "zcopy": false, 00:20:05.154 "get_zone_info": false, 00:20:05.154 "zone_management": false, 00:20:05.154 "zone_append": false, 00:20:05.154 "compare": false, 00:20:05.154 "compare_and_write": false, 00:20:05.154 "abort": false, 00:20:05.154 "seek_hole": true, 00:20:05.154 "seek_data": true, 00:20:05.154 "copy": false, 00:20:05.154 "nvme_iov_md": false 00:20:05.154 }, 00:20:05.154 "driver_specific": { 00:20:05.154 "lvol": { 00:20:05.154 "lvol_store_uuid": "4feb386d-6833-4f0a-b42e-129da1e3bb25", 00:20:05.154 "base_bdev": "nvme0n1", 00:20:05.154 "thin_provision": true, 00:20:05.154 "num_allocated_clusters": 0, 00:20:05.154 "snapshot": false, 00:20:05.154 "clone": false, 00:20:05.154 "esnap_clone": false 00:20:05.154 } 00:20:05.154 } 00:20:05.154 } 00:20:05.154 ]' 00:20:05.154 08:00:06 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:20:05.154 08:00:07 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:20:05.154 08:00:07 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:20:05.154 08:00:07 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=26476544 00:20:05.154 08:00:07 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:20:05.154 08:00:07 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 103424 00:20:05.154 08:00:07 ftl.ftl_restore -- ftl/common.sh@48 -- # cache_size=5171 00:20:05.154 08:00:07 ftl.ftl_restore -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:20:05.413 08:00:07 ftl.ftl_restore -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:20:05.413 08:00:07 ftl.ftl_restore -- ftl/restore.sh@48 -- # get_bdev_size e99fdbd0-9aa8-4f4f-9f42-723b1ccd4eb8 00:20:05.413 08:00:07 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=e99fdbd0-9aa8-4f4f-9f42-723b1ccd4eb8 00:20:05.413 08:00:07 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:20:05.413 08:00:07 ftl.ftl_restore -- 
common/autotest_common.sh@1380 -- # local bs 00:20:05.413 08:00:07 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:20:05.413 08:00:07 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b e99fdbd0-9aa8-4f4f-9f42-723b1ccd4eb8 00:20:05.978 08:00:07 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:20:05.979 { 00:20:05.979 "name": "e99fdbd0-9aa8-4f4f-9f42-723b1ccd4eb8", 00:20:05.979 "aliases": [ 00:20:05.979 "lvs/nvme0n1p0" 00:20:05.979 ], 00:20:05.979 "product_name": "Logical Volume", 00:20:05.979 "block_size": 4096, 00:20:05.979 "num_blocks": 26476544, 00:20:05.979 "uuid": "e99fdbd0-9aa8-4f4f-9f42-723b1ccd4eb8", 00:20:05.979 "assigned_rate_limits": { 00:20:05.979 "rw_ios_per_sec": 0, 00:20:05.979 "rw_mbytes_per_sec": 0, 00:20:05.979 "r_mbytes_per_sec": 0, 00:20:05.979 "w_mbytes_per_sec": 0 00:20:05.979 }, 00:20:05.979 "claimed": false, 00:20:05.979 "zoned": false, 00:20:05.979 "supported_io_types": { 00:20:05.979 "read": true, 00:20:05.979 "write": true, 00:20:05.979 "unmap": true, 00:20:05.979 "flush": false, 00:20:05.979 "reset": true, 00:20:05.979 "nvme_admin": false, 00:20:05.979 "nvme_io": false, 00:20:05.979 "nvme_io_md": false, 00:20:05.979 "write_zeroes": true, 00:20:05.979 "zcopy": false, 00:20:05.979 "get_zone_info": false, 00:20:05.979 "zone_management": false, 00:20:05.979 "zone_append": false, 00:20:05.979 "compare": false, 00:20:05.979 "compare_and_write": false, 00:20:05.979 "abort": false, 00:20:05.979 "seek_hole": true, 00:20:05.979 "seek_data": true, 00:20:05.979 "copy": false, 00:20:05.979 "nvme_iov_md": false 00:20:05.979 }, 00:20:05.979 "driver_specific": { 00:20:05.979 "lvol": { 00:20:05.979 "lvol_store_uuid": "4feb386d-6833-4f0a-b42e-129da1e3bb25", 00:20:05.979 "base_bdev": "nvme0n1", 00:20:05.979 "thin_provision": true, 00:20:05.979 "num_allocated_clusters": 0, 00:20:05.979 "snapshot": false, 00:20:05.979 "clone": false, 00:20:05.979 "esnap_clone": false 00:20:05.979 } 00:20:05.979 } 00:20:05.979 } 00:20:05.979 ]' 00:20:05.979 08:00:07 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:20:05.979 08:00:07 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:20:05.979 08:00:07 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:20:05.979 08:00:07 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=26476544 00:20:05.979 08:00:07 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:20:05.979 08:00:07 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 103424 00:20:05.979 08:00:07 ftl.ftl_restore -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:20:05.979 08:00:07 ftl.ftl_restore -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d e99fdbd0-9aa8-4f4f-9f42-723b1ccd4eb8 --l2p_dram_limit 10' 00:20:05.979 08:00:07 ftl.ftl_restore -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:20:05.979 08:00:07 ftl.ftl_restore -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:20:05.979 08:00:07 ftl.ftl_restore -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:20:05.979 08:00:07 ftl.ftl_restore -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:20:05.979 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:20:05.979 08:00:07 ftl.ftl_restore -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d e99fdbd0-9aa8-4f4f-9f42-723b1ccd4eb8 --l2p_dram_limit 10 -c nvc0n1p0 00:20:06.238 
[2024-10-09 08:00:08.152222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.238 [2024-10-09 08:00:08.152317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:06.238 [2024-10-09 08:00:08.152376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:20:06.238 [2024-10-09 08:00:08.152413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.238 [2024-10-09 08:00:08.152582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.238 [2024-10-09 08:00:08.152623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:06.238 [2024-10-09 08:00:08.152651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.118 ms 00:20:06.238 [2024-10-09 08:00:08.152671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.238 [2024-10-09 08:00:08.152761] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:06.238 [2024-10-09 08:00:08.154281] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:06.238 [2024-10-09 08:00:08.154375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.238 [2024-10-09 08:00:08.154413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:06.238 [2024-10-09 08:00:08.154440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.650 ms 00:20:06.238 [2024-10-09 08:00:08.154465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.238 [2024-10-09 08:00:08.154731] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 2b8840b3-27ea-4ae9-a311-de7b8f0c5f0b 00:20:06.238 [2024-10-09 08:00:08.156247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.238 [2024-10-09 08:00:08.156299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:20:06.238 [2024-10-09 08:00:08.156319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:20:06.238 [2024-10-09 08:00:08.156350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.238 [2024-10-09 08:00:08.162048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.238 [2024-10-09 08:00:08.162144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:06.238 [2024-10-09 08:00:08.162174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.512 ms 00:20:06.238 [2024-10-09 08:00:08.162204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.238 [2024-10-09 08:00:08.162445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.238 [2024-10-09 08:00:08.162483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:06.238 [2024-10-09 08:00:08.162507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.172 ms 00:20:06.238 [2024-10-09 08:00:08.162553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.238 [2024-10-09 08:00:08.162648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.238 [2024-10-09 08:00:08.162681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:06.238 [2024-10-09 08:00:08.162697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:20:06.238 [2024-10-09 08:00:08.162724] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.238 [2024-10-09 08:00:08.162775] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:06.238 [2024-10-09 08:00:08.168694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.238 [2024-10-09 08:00:08.168741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:06.238 [2024-10-09 08:00:08.168769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.933 ms 00:20:06.238 [2024-10-09 08:00:08.168783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.238 [2024-10-09 08:00:08.168856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.238 [2024-10-09 08:00:08.168879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:06.238 [2024-10-09 08:00:08.168901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:20:06.238 [2024-10-09 08:00:08.168919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.238 [2024-10-09 08:00:08.169011] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:20:06.238 [2024-10-09 08:00:08.169285] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:06.238 [2024-10-09 08:00:08.169351] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:06.238 [2024-10-09 08:00:08.169385] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:06.238 [2024-10-09 08:00:08.169423] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:06.238 [2024-10-09 08:00:08.169446] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:06.238 [2024-10-09 08:00:08.169471] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:20:06.238 [2024-10-09 08:00:08.169491] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:06.238 [2024-10-09 08:00:08.169514] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:06.238 [2024-10-09 08:00:08.169533] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:06.238 [2024-10-09 08:00:08.169573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.238 [2024-10-09 08:00:08.169612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:06.238 [2024-10-09 08:00:08.169640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.566 ms 00:20:06.238 [2024-10-09 08:00:08.169660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.238 [2024-10-09 08:00:08.169810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.238 [2024-10-09 08:00:08.169840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:06.238 [2024-10-09 08:00:08.169857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.097 ms 00:20:06.238 [2024-10-09 08:00:08.169868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.238 [2024-10-09 08:00:08.169993] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:06.238 [2024-10-09 08:00:08.170013] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region sb 00:20:06.238 [2024-10-09 08:00:08.170029] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:06.238 [2024-10-09 08:00:08.170042] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:06.238 [2024-10-09 08:00:08.170056] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:06.238 [2024-10-09 08:00:08.170068] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:06.238 [2024-10-09 08:00:08.170081] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:20:06.238 [2024-10-09 08:00:08.170093] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:06.238 [2024-10-09 08:00:08.170112] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:20:06.238 [2024-10-09 08:00:08.170124] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:06.238 [2024-10-09 08:00:08.170137] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:06.238 [2024-10-09 08:00:08.170152] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:20:06.238 [2024-10-09 08:00:08.170176] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:06.238 [2024-10-09 08:00:08.170200] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:06.238 [2024-10-09 08:00:08.170225] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:20:06.238 [2024-10-09 08:00:08.170245] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:06.238 [2024-10-09 08:00:08.170271] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:06.238 [2024-10-09 08:00:08.170290] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:20:06.238 [2024-10-09 08:00:08.170306] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:06.238 [2024-10-09 08:00:08.170326] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:06.238 [2024-10-09 08:00:08.170375] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:20:06.238 [2024-10-09 08:00:08.170396] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:06.238 [2024-10-09 08:00:08.170422] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:06.238 [2024-10-09 08:00:08.170444] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:20:06.238 [2024-10-09 08:00:08.170466] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:06.238 [2024-10-09 08:00:08.170485] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:06.238 [2024-10-09 08:00:08.170508] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:20:06.238 [2024-10-09 08:00:08.170528] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:06.238 [2024-10-09 08:00:08.170550] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:06.238 [2024-10-09 08:00:08.170570] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:20:06.238 [2024-10-09 08:00:08.170592] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:06.238 [2024-10-09 08:00:08.170610] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:06.238 [2024-10-09 08:00:08.170636] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:20:06.238 [2024-10-09 08:00:08.170665] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:06.238 [2024-10-09 08:00:08.170690] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:06.238 [2024-10-09 08:00:08.170704] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:20:06.238 [2024-10-09 08:00:08.170717] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:06.239 [2024-10-09 08:00:08.170728] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:06.239 [2024-10-09 08:00:08.170741] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:20:06.239 [2024-10-09 08:00:08.170752] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:06.239 [2024-10-09 08:00:08.170765] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:06.239 [2024-10-09 08:00:08.170777] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:20:06.239 [2024-10-09 08:00:08.170798] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:06.239 [2024-10-09 08:00:08.170817] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:06.239 [2024-10-09 08:00:08.170855] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:06.239 [2024-10-09 08:00:08.170880] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:06.239 [2024-10-09 08:00:08.170909] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:06.239 [2024-10-09 08:00:08.170933] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:06.239 [2024-10-09 08:00:08.170982] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:06.239 [2024-10-09 08:00:08.171003] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:06.239 [2024-10-09 08:00:08.171033] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:06.239 [2024-10-09 08:00:08.171053] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:06.239 [2024-10-09 08:00:08.171081] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:06.239 [2024-10-09 08:00:08.171108] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:06.239 [2024-10-09 08:00:08.171135] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:06.239 [2024-10-09 08:00:08.171158] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:20:06.239 [2024-10-09 08:00:08.171181] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:20:06.239 [2024-10-09 08:00:08.171201] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:20:06.239 [2024-10-09 08:00:08.171224] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:20:06.239 [2024-10-09 08:00:08.171247] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:20:06.239 [2024-10-09 08:00:08.171270] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 
blk_offs:0x6120 blk_sz:0x800 00:20:06.239 [2024-10-09 08:00:08.171290] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:20:06.239 [2024-10-09 08:00:08.171312] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:20:06.239 [2024-10-09 08:00:08.171368] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:20:06.239 [2024-10-09 08:00:08.171399] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:20:06.239 [2024-10-09 08:00:08.171420] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:20:06.239 [2024-10-09 08:00:08.171444] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:20:06.239 [2024-10-09 08:00:08.171464] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:20:06.239 [2024-10-09 08:00:08.171487] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:20:06.239 [2024-10-09 08:00:08.171508] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:06.239 [2024-10-09 08:00:08.171550] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:06.239 [2024-10-09 08:00:08.171587] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:06.239 [2024-10-09 08:00:08.171629] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:06.239 [2024-10-09 08:00:08.171663] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:06.239 [2024-10-09 08:00:08.171690] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:06.239 [2024-10-09 08:00:08.171713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.239 [2024-10-09 08:00:08.171734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:06.239 [2024-10-09 08:00:08.171748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.795 ms 00:20:06.239 [2024-10-09 08:00:08.171762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.239 [2024-10-09 08:00:08.171836] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
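The layout and superblock dumps above are internally consistent, which is a useful sanity check when debugging FTL sizing: the L2P region advertises 20971520 entries of 4 bytes, the superblock's type:0x2 region spans 0x5000 blocks of 4 KiB (both come to 80 MiB), and each of the four P2L checkpoint regions (types 0xa through 0xd) spans 0x800 blocks, i.e. 8 MiB, matching p2l0..p2l3 above. Quick shell arithmetic confirms it:

    echo $(( 20971520 * 4 / 1024 / 1024 ))    # 80 -> Region l2p: 80.00 MiB
    echo $(( 0x5000 * 4096 / 1024 / 1024 ))   # 80 -> type:0x2 blk_sz:0x5000
    echo $(( 0x800 * 4096 / 1024 / 1024 ))    # 8  -> p2l0..p2l3: 8.00 MiB each

The scrub of the 5 NV cache chunks announced above also accounts for the roughly 3.3-second gap before the next notice (the trace later reports 'Scrub NV cache ... duration: 3281.250 ms').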
00:20:06.239 [2024-10-09 08:00:08.172067] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:20:09.517 [2024-10-09 08:00:11.453057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:09.517 [2024-10-09 08:00:11.453340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:20:09.517 [2024-10-09 08:00:11.453498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3281.250 ms 00:20:09.517 [2024-10-09 08:00:11.453631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.517 [2024-10-09 08:00:11.486659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:09.517 [2024-10-09 08:00:11.486897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:09.517 [2024-10-09 08:00:11.487026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.685 ms 00:20:09.517 [2024-10-09 08:00:11.487090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.517 [2024-10-09 08:00:11.487513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:09.517 [2024-10-09 08:00:11.487595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:09.517 [2024-10-09 08:00:11.487851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.082 ms 00:20:09.517 [2024-10-09 08:00:11.487922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.775 [2024-10-09 08:00:11.541896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:09.775 [2024-10-09 08:00:11.542233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:09.775 [2024-10-09 08:00:11.542505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 53.856 ms 00:20:09.775 [2024-10-09 08:00:11.542744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.775 [2024-10-09 08:00:11.543096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:09.775 [2024-10-09 08:00:11.543327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:09.775 [2024-10-09 08:00:11.543582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:20:09.775 [2024-10-09 08:00:11.543833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.775 [2024-10-09 08:00:11.544634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:09.775 [2024-10-09 08:00:11.544859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:09.775 [2024-10-09 08:00:11.545070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.472 ms 00:20:09.775 [2024-10-09 08:00:11.545279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.775 [2024-10-09 08:00:11.545748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:09.775 [2024-10-09 08:00:11.545974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:09.775 [2024-10-09 08:00:11.546018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.210 ms 00:20:09.775 [2024-10-09 08:00:11.546069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.775 [2024-10-09 08:00:11.568269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:09.775 [2024-10-09 08:00:11.568540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:09.775 [2024-10-09 
08:00:11.568578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.141 ms 00:20:09.776 [2024-10-09 08:00:11.568609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.776 [2024-10-09 08:00:11.585293] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:20:09.776 [2024-10-09 08:00:11.588459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:09.776 [2024-10-09 08:00:11.588504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:09.776 [2024-10-09 08:00:11.588530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.667 ms 00:20:09.776 [2024-10-09 08:00:11.588560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.776 [2024-10-09 08:00:11.692492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:09.776 [2024-10-09 08:00:11.692619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:20:09.776 [2024-10-09 08:00:11.692671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 103.849 ms 00:20:09.776 [2024-10-09 08:00:11.692697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.776 [2024-10-09 08:00:11.693143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:09.776 [2024-10-09 08:00:11.693191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:09.776 [2024-10-09 08:00:11.693237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.342 ms 00:20:09.776 [2024-10-09 08:00:11.693262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.776 [2024-10-09 08:00:11.747108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:09.776 [2024-10-09 08:00:11.747226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:20:09.776 [2024-10-09 08:00:11.747269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 53.636 ms 00:20:09.776 [2024-10-09 08:00:11.747290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.034 [2024-10-09 08:00:11.799762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.034 [2024-10-09 08:00:11.800122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:20:10.034 [2024-10-09 08:00:11.800171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 52.345 ms 00:20:10.034 [2024-10-09 08:00:11.800188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.034 [2024-10-09 08:00:11.801409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.034 [2024-10-09 08:00:11.801460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:10.034 [2024-10-09 08:00:11.801484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.144 ms 00:20:10.034 [2024-10-09 08:00:11.801498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.034 [2024-10-09 08:00:11.930759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.034 [2024-10-09 08:00:11.930843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:20:10.034 [2024-10-09 08:00:11.930883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 129.110 ms 00:20:10.034 [2024-10-09 08:00:11.930906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.034 [2024-10-09 
08:00:11.975805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.034 [2024-10-09 08:00:11.975881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:20:10.034 [2024-10-09 08:00:11.975909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.735 ms 00:20:10.034 [2024-10-09 08:00:11.975925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.034 [2024-10-09 08:00:12.016123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.034 [2024-10-09 08:00:12.016203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:20:10.034 [2024-10-09 08:00:12.016232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.129 ms 00:20:10.034 [2024-10-09 08:00:12.016247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.293 [2024-10-09 08:00:12.054208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.293 [2024-10-09 08:00:12.054439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:10.293 [2024-10-09 08:00:12.054478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.904 ms 00:20:10.293 [2024-10-09 08:00:12.054493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.293 [2024-10-09 08:00:12.054544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.293 [2024-10-09 08:00:12.054562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:10.293 [2024-10-09 08:00:12.054592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:20:10.293 [2024-10-09 08:00:12.054625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.293 [2024-10-09 08:00:12.054833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.293 [2024-10-09 08:00:12.054865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:10.293 [2024-10-09 08:00:12.054885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:20:10.293 [2024-10-09 08:00:12.054899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.293 [2024-10-09 08:00:12.056125] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3903.455 ms, result 0 00:20:10.293 { 00:20:10.293 "name": "ftl0", 00:20:10.293 "uuid": "2b8840b3-27ea-4ae9-a311-de7b8f0c5f0b" 00:20:10.293 } 00:20:10.293 08:00:12 ftl.ftl_restore -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:20:10.293 08:00:12 ftl.ftl_restore -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:20:10.552 08:00:12 ftl.ftl_restore -- ftl/restore.sh@63 -- # echo ']}' 00:20:10.552 08:00:12 ftl.ftl_restore -- ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:20:10.810 [2024-10-09 08:00:12.639674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.810 [2024-10-09 08:00:12.639813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:10.810 [2024-10-09 08:00:12.639878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:20:10.810 [2024-10-09 08:00:12.639924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.810 [2024-10-09 08:00:12.639999] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 
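The shutdown notices continue below; the two RPCs that set them off ran just above. restore.sh wraps the save_subsystem_config dump of the bdev subsystem in a {"subsystems": [...]} envelope, so the resulting JSON can later seed a fresh spdk_tgt, and then unloads the FTL bdev. A sketch of that pair, with the redirect destination assumed rather than shown by the trace:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    {
        echo '{"subsystems": ['
        "$rpc" save_subsystem_config -n bdev   # bdev subsystem config only
        echo ']}'
    } > /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json   # assumed path
    "$rpc" bdev_ftl_unload -b ftl0   # triggers the 'FTL shutdown' sequence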
00:20:10.810 [2024-10-09 08:00:12.643657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.810 [2024-10-09 08:00:12.643814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:10.810 [2024-10-09 08:00:12.643966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.396 ms 00:20:10.810 [2024-10-09 08:00:12.643998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.810 [2024-10-09 08:00:12.644379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.810 [2024-10-09 08:00:12.644420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:10.810 [2024-10-09 08:00:12.644439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.330 ms 00:20:10.810 [2024-10-09 08:00:12.644451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.810 [2024-10-09 08:00:12.647785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.810 [2024-10-09 08:00:12.647828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:10.810 [2024-10-09 08:00:12.647847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.304 ms 00:20:10.810 [2024-10-09 08:00:12.647862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.810 [2024-10-09 08:00:12.655024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.810 [2024-10-09 08:00:12.655128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:10.810 [2024-10-09 08:00:12.655153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.108 ms 00:20:10.810 [2024-10-09 08:00:12.655166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.810 [2024-10-09 08:00:12.687220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.810 [2024-10-09 08:00:12.687285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:10.810 [2024-10-09 08:00:12.687318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.898 ms 00:20:10.810 [2024-10-09 08:00:12.687357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.810 [2024-10-09 08:00:12.708139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.810 [2024-10-09 08:00:12.708198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:10.810 [2024-10-09 08:00:12.708222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.708 ms 00:20:10.810 [2024-10-09 08:00:12.708235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.810 [2024-10-09 08:00:12.708504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.810 [2024-10-09 08:00:12.708543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:10.810 [2024-10-09 08:00:12.708572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.198 ms 00:20:10.810 [2024-10-09 08:00:12.708607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.810 [2024-10-09 08:00:12.740356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.810 [2024-10-09 08:00:12.740419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:10.810 [2024-10-09 08:00:12.740442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.682 ms 00:20:10.810 [2024-10-09 08:00:12.740455] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.810 [2024-10-09 08:00:12.771766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.810 [2024-10-09 08:00:12.771823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:10.810 [2024-10-09 08:00:12.771850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.240 ms 00:20:10.810 [2024-10-09 08:00:12.771864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.810 [2024-10-09 08:00:12.802666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.810 [2024-10-09 08:00:12.802724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:10.810 [2024-10-09 08:00:12.802746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.736 ms 00:20:10.810 [2024-10-09 08:00:12.802758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.068 [2024-10-09 08:00:12.834469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.068 [2024-10-09 08:00:12.834534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:11.068 [2024-10-09 08:00:12.834557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.569 ms 00:20:11.068 [2024-10-09 08:00:12.834570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.068 [2024-10-09 08:00:12.834629] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:11.068 [2024-10-09 08:00:12.834656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:11.068 [2024-10-09 08:00:12.834674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:11.068 [2024-10-09 08:00:12.834687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:11.068 [2024-10-09 08:00:12.834702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:11.068 [2024-10-09 08:00:12.834714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:11.068 [2024-10-09 08:00:12.834729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:11.068 [2024-10-09 08:00:12.834741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:11.068 [2024-10-09 08:00:12.834759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:11.068 [2024-10-09 08:00:12.834772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:11.068 [2024-10-09 08:00:12.834786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:11.068 [2024-10-09 08:00:12.834799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:11.068 [2024-10-09 08:00:12.834820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:11.068 [2024-10-09 08:00:12.834835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:11.068 [2024-10-09 08:00:12.834854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:11.068 [2024-10-09 
08:00:12.834868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:11.068 [2024-10-09 08:00:12.834886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:11.068 [2024-10-09 08:00:12.834899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:11.068 [2024-10-09 08:00:12.834917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:11.068 [2024-10-09 08:00:12.834931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:11.068 [2024-10-09 08:00:12.834946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:11.068 [2024-10-09 08:00:12.834959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:11.068 [2024-10-09 08:00:12.834976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:11.068 [2024-10-09 08:00:12.834989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:11.068 [2024-10-09 08:00:12.835005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:11.068 [2024-10-09 08:00:12.835018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:11.068 [2024-10-09 08:00:12.835033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:11.068 [2024-10-09 08:00:12.835046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:11.068 [2024-10-09 08:00:12.835061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:11.068 [2024-10-09 08:00:12.835073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:11.068 [2024-10-09 08:00:12.835088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:11.068 [2024-10-09 08:00:12.835101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:11.068 [2024-10-09 08:00:12.835115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:11.068 [2024-10-09 08:00:12.835128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:11.068 [2024-10-09 08:00:12.835150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:11.068 [2024-10-09 08:00:12.835165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:11.068 [2024-10-09 08:00:12.835183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:11.068 [2024-10-09 08:00:12.835197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:11.068 [2024-10-09 08:00:12.835215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:11.068 [2024-10-09 08:00:12.835229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 
00:20:11.068 [2024-10-09 08:00:12.835249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:11.068 [2024-10-09 08:00:12.835263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:11.068 [2024-10-09 08:00:12.835277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:11.068 [2024-10-09 08:00:12.835290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:11.068 [2024-10-09 08:00:12.835304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:11.068 [2024-10-09 08:00:12.835316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:11.068 [2024-10-09 08:00:12.835352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:11.068 [2024-10-09 08:00:12.835368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:11.068 [2024-10-09 08:00:12.835390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:11.068 [2024-10-09 08:00:12.835403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:11.068 [2024-10-09 08:00:12.835417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:11.068 [2024-10-09 08:00:12.835430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:11.068 [2024-10-09 08:00:12.835444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:11.068 [2024-10-09 08:00:12.835457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:11.068 [2024-10-09 08:00:12.835471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:11.068 [2024-10-09 08:00:12.835483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:11.068 [2024-10-09 08:00:12.835501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:11.068 [2024-10-09 08:00:12.835513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:11.068 [2024-10-09 08:00:12.835528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:11.068 [2024-10-09 08:00:12.835540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:11.068 [2024-10-09 08:00:12.835555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:11.068 [2024-10-09 08:00:12.835568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:11.068 [2024-10-09 08:00:12.835596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:11.068 [2024-10-09 08:00:12.835618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:11.068 [2024-10-09 08:00:12.835634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 
wr_cnt: 0 state: free 00:20:11.068 [2024-10-09 08:00:12.835646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:11.068 [2024-10-09 08:00:12.835661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:11.068 [2024-10-09 08:00:12.835674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:11.068 [2024-10-09 08:00:12.835689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:11.068 [2024-10-09 08:00:12.835701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:11.068 [2024-10-09 08:00:12.835715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:11.068 [2024-10-09 08:00:12.835727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:11.068 [2024-10-09 08:00:12.835746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:11.068 [2024-10-09 08:00:12.835759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:11.068 [2024-10-09 08:00:12.835773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:11.068 [2024-10-09 08:00:12.835785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:11.068 [2024-10-09 08:00:12.835799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:11.068 [2024-10-09 08:00:12.835812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:11.068 [2024-10-09 08:00:12.835825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:11.068 [2024-10-09 08:00:12.835837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:11.068 [2024-10-09 08:00:12.835851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:11.068 [2024-10-09 08:00:12.835863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:11.068 [2024-10-09 08:00:12.835878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:11.068 [2024-10-09 08:00:12.835890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:11.068 [2024-10-09 08:00:12.835904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:11.068 [2024-10-09 08:00:12.835917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:11.068 [2024-10-09 08:00:12.835931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:11.068 [2024-10-09 08:00:12.835943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:11.068 [2024-10-09 08:00:12.835965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:11.068 [2024-10-09 08:00:12.835978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:11.068 [2024-10-09 08:00:12.835997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:11.068 [2024-10-09 08:00:12.836010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:11.068 [2024-10-09 08:00:12.836027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:11.068 [2024-10-09 08:00:12.836042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:11.068 [2024-10-09 08:00:12.836059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:11.068 [2024-10-09 08:00:12.836073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:11.068 [2024-10-09 08:00:12.836087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:11.068 [2024-10-09 08:00:12.836100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:11.068 [2024-10-09 08:00:12.836114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:11.068 [2024-10-09 08:00:12.836127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:11.069 [2024-10-09 08:00:12.836143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:11.069 [2024-10-09 08:00:12.836165] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:11.069 [2024-10-09 08:00:12.836179] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 2b8840b3-27ea-4ae9-a311-de7b8f0c5f0b 00:20:11.069 [2024-10-09 08:00:12.836200] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:11.069 [2024-10-09 08:00:12.836221] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:11.069 [2024-10-09 08:00:12.836233] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:11.069 [2024-10-09 08:00:12.836250] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:11.069 [2024-10-09 08:00:12.836262] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:11.069 [2024-10-09 08:00:12.836279] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:11.069 [2024-10-09 08:00:12.836300] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:11.069 [2024-10-09 08:00:12.836316] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:11.069 [2024-10-09 08:00:12.836327] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:11.069 [2024-10-09 08:00:12.836361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.069 [2024-10-09 08:00:12.836377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:11.069 [2024-10-09 08:00:12.836393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.731 ms 00:20:11.069 [2024-10-09 08:00:12.836415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.069 [2024-10-09 08:00:12.853150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.069 [2024-10-09 08:00:12.853205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 
00:20:11.069 [2024-10-09 08:00:12.853227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.659 ms 00:20:11.069 [2024-10-09 08:00:12.853240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.069 [2024-10-09 08:00:12.853740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.069 [2024-10-09 08:00:12.853760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:11.069 [2024-10-09 08:00:12.853783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.447 ms 00:20:11.069 [2024-10-09 08:00:12.853795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.069 [2024-10-09 08:00:12.903117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:11.069 [2024-10-09 08:00:12.903187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:11.069 [2024-10-09 08:00:12.903217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:11.069 [2024-10-09 08:00:12.903229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.069 [2024-10-09 08:00:12.903324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:11.069 [2024-10-09 08:00:12.903364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:11.069 [2024-10-09 08:00:12.903381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:11.069 [2024-10-09 08:00:12.903406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.069 [2024-10-09 08:00:12.903566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:11.069 [2024-10-09 08:00:12.903587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:11.069 [2024-10-09 08:00:12.903603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:11.069 [2024-10-09 08:00:12.903635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.069 [2024-10-09 08:00:12.903671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:11.069 [2024-10-09 08:00:12.903695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:11.069 [2024-10-09 08:00:12.903710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:11.069 [2024-10-09 08:00:12.903722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.069 [2024-10-09 08:00:13.009763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:11.069 [2024-10-09 08:00:13.009821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:11.069 [2024-10-09 08:00:13.009843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:11.069 [2024-10-09 08:00:13.009856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.328 [2024-10-09 08:00:13.094563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:11.328 [2024-10-09 08:00:13.094633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:11.328 [2024-10-09 08:00:13.094663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:11.328 [2024-10-09 08:00:13.094676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.328 [2024-10-09 08:00:13.094816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:11.328 [2024-10-09 08:00:13.094844] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:11.328 [2024-10-09 08:00:13.094860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:11.328 [2024-10-09 08:00:13.094877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.328 [2024-10-09 08:00:13.094959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:11.328 [2024-10-09 08:00:13.094981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:11.328 [2024-10-09 08:00:13.094997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:11.328 [2024-10-09 08:00:13.095009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.328 [2024-10-09 08:00:13.095160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:11.328 [2024-10-09 08:00:13.095181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:11.328 [2024-10-09 08:00:13.095197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:11.328 [2024-10-09 08:00:13.095208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.328 [2024-10-09 08:00:13.095265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:11.328 [2024-10-09 08:00:13.095285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:11.328 [2024-10-09 08:00:13.095303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:11.328 [2024-10-09 08:00:13.095314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.328 [2024-10-09 08:00:13.095398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:11.328 [2024-10-09 08:00:13.095418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:11.328 [2024-10-09 08:00:13.095433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:11.328 [2024-10-09 08:00:13.095445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.328 [2024-10-09 08:00:13.095507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:11.328 [2024-10-09 08:00:13.095527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:11.328 [2024-10-09 08:00:13.095551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:11.328 [2024-10-09 08:00:13.095563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.328 [2024-10-09 08:00:13.095737] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 456.029 ms, result 0 00:20:11.328 true 00:20:11.328 08:00:13 ftl.ftl_restore -- ftl/restore.sh@66 -- # killprocess 76877 00:20:11.328 08:00:13 ftl.ftl_restore -- common/autotest_common.sh@950 -- # '[' -z 76877 ']' 00:20:11.328 08:00:13 ftl.ftl_restore -- common/autotest_common.sh@954 -- # kill -0 76877 00:20:11.328 08:00:13 ftl.ftl_restore -- common/autotest_common.sh@955 -- # uname 00:20:11.328 08:00:13 ftl.ftl_restore -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:11.328 08:00:13 ftl.ftl_restore -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76877 00:20:11.328 08:00:13 ftl.ftl_restore -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:11.328 08:00:13 ftl.ftl_restore -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:11.328 08:00:13 ftl.ftl_restore -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 76877' 00:20:11.328 killing process with pid 76877 00:20:11.328 08:00:13 ftl.ftl_restore -- common/autotest_common.sh@969 -- # kill 76877 00:20:11.328 08:00:13 ftl.ftl_restore -- common/autotest_common.sh@974 -- # wait 76877 00:20:16.604 08:00:17 ftl.ftl_restore -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:20:21.869 262144+0 records in 00:20:21.869 262144+0 records out 00:20:21.869 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 5.1582 s, 208 MB/s 00:20:21.869 08:00:23 ftl.ftl_restore -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:20:23.813 08:00:25 ftl.ftl_restore -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:23.813 [2024-10-09 08:00:25.424931] Starting SPDK v25.01-pre git sha1 1c2942c86 / DPDK 24.03.0 initialization... 00:20:23.813 [2024-10-09 08:00:25.425138] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77153 ] 00:20:23.813 [2024-10-09 08:00:25.607058] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:23.813 [2024-10-09 08:00:25.803836] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:20:24.380 [2024-10-09 08:00:26.134127] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:24.380 [2024-10-09 08:00:26.134208] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:24.380 [2024-10-09 08:00:26.301509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.380 [2024-10-09 08:00:26.301582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:24.380 [2024-10-09 08:00:26.301604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:24.380 [2024-10-09 08:00:26.301617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.380 [2024-10-09 08:00:26.301716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.380 [2024-10-09 08:00:26.301746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:24.380 [2024-10-09 08:00:26.301763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:20:24.380 [2024-10-09 08:00:26.301774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.380 [2024-10-09 08:00:26.301810] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:24.380 [2024-10-09 08:00:26.302757] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:24.380 [2024-10-09 08:00:26.302792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.380 [2024-10-09 08:00:26.302806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:24.380 [2024-10-09 08:00:26.302819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.991 ms 00:20:24.380 [2024-10-09 08:00:26.302830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.380 [2024-10-09 08:00:26.304062] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: 
*NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:24.380 [2024-10-09 08:00:26.321196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.380 [2024-10-09 08:00:26.321251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:24.380 [2024-10-09 08:00:26.321270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.137 ms 00:20:24.380 [2024-10-09 08:00:26.321282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.380 [2024-10-09 08:00:26.321383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.380 [2024-10-09 08:00:26.321405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:24.380 [2024-10-09 08:00:26.321418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:20:24.380 [2024-10-09 08:00:26.321429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.380 [2024-10-09 08:00:26.325902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.380 [2024-10-09 08:00:26.325963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:24.380 [2024-10-09 08:00:26.325981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.363 ms 00:20:24.380 [2024-10-09 08:00:26.325993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.380 [2024-10-09 08:00:26.326124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.380 [2024-10-09 08:00:26.326146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:24.380 [2024-10-09 08:00:26.326160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.099 ms 00:20:24.380 [2024-10-09 08:00:26.326171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.380 [2024-10-09 08:00:26.326245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.380 [2024-10-09 08:00:26.326264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:24.380 [2024-10-09 08:00:26.326277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:20:24.380 [2024-10-09 08:00:26.326288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.380 [2024-10-09 08:00:26.326323] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:24.380 [2024-10-09 08:00:26.330695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.380 [2024-10-09 08:00:26.330757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:24.380 [2024-10-09 08:00:26.330775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.381 ms 00:20:24.380 [2024-10-09 08:00:26.330787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.380 [2024-10-09 08:00:26.330828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.380 [2024-10-09 08:00:26.330844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:24.380 [2024-10-09 08:00:26.330858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:20:24.380 [2024-10-09 08:00:26.330869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.380 [2024-10-09 08:00:26.330937] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:24.380 [2024-10-09 08:00:26.330975] upgrade/ftl_sb_v5.c: 
278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:24.380 [2024-10-09 08:00:26.331020] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:24.380 [2024-10-09 08:00:26.331041] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:24.381 [2024-10-09 08:00:26.331159] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:24.381 [2024-10-09 08:00:26.331175] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:24.381 [2024-10-09 08:00:26.331190] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:24.381 [2024-10-09 08:00:26.331216] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:24.381 [2024-10-09 08:00:26.331230] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:24.381 [2024-10-09 08:00:26.331242] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:20:24.381 [2024-10-09 08:00:26.331253] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:24.381 [2024-10-09 08:00:26.331264] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:24.381 [2024-10-09 08:00:26.331275] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:24.381 [2024-10-09 08:00:26.331287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.381 [2024-10-09 08:00:26.331298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:24.381 [2024-10-09 08:00:26.331310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.354 ms 00:20:24.381 [2024-10-09 08:00:26.331320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.381 [2024-10-09 08:00:26.331456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.381 [2024-10-09 08:00:26.331487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:24.381 [2024-10-09 08:00:26.331500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:20:24.381 [2024-10-09 08:00:26.331511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.381 [2024-10-09 08:00:26.331675] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:24.381 [2024-10-09 08:00:26.331709] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:24.381 [2024-10-09 08:00:26.331728] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:24.381 [2024-10-09 08:00:26.331740] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:24.381 [2024-10-09 08:00:26.331752] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:24.381 [2024-10-09 08:00:26.331762] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:24.381 [2024-10-09 08:00:26.331773] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:20:24.381 [2024-10-09 08:00:26.331783] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:24.381 [2024-10-09 08:00:26.331794] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:20:24.381 [2024-10-09 
08:00:26.331804] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:24.381 [2024-10-09 08:00:26.331814] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:24.381 [2024-10-09 08:00:26.331825] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:20:24.381 [2024-10-09 08:00:26.331835] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:24.381 [2024-10-09 08:00:26.331868] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:24.381 [2024-10-09 08:00:26.331881] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:20:24.381 [2024-10-09 08:00:26.331892] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:24.381 [2024-10-09 08:00:26.331903] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:24.381 [2024-10-09 08:00:26.331913] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:20:24.381 [2024-10-09 08:00:26.331923] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:24.381 [2024-10-09 08:00:26.331934] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:24.381 [2024-10-09 08:00:26.331944] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:20:24.381 [2024-10-09 08:00:26.331954] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:24.381 [2024-10-09 08:00:26.331965] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:24.381 [2024-10-09 08:00:26.331975] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:20:24.381 [2024-10-09 08:00:26.331985] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:24.381 [2024-10-09 08:00:26.331995] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:24.381 [2024-10-09 08:00:26.332005] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:20:24.381 [2024-10-09 08:00:26.332015] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:24.381 [2024-10-09 08:00:26.332025] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:24.381 [2024-10-09 08:00:26.332036] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:20:24.381 [2024-10-09 08:00:26.332045] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:24.381 [2024-10-09 08:00:26.332056] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:24.381 [2024-10-09 08:00:26.332066] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:20:24.381 [2024-10-09 08:00:26.332076] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:24.381 [2024-10-09 08:00:26.332086] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:24.381 [2024-10-09 08:00:26.332096] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:20:24.381 [2024-10-09 08:00:26.332106] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:24.381 [2024-10-09 08:00:26.332117] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:24.381 [2024-10-09 08:00:26.332127] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:20:24.381 [2024-10-09 08:00:26.332137] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:24.381 [2024-10-09 08:00:26.332148] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
trim_log_mirror 00:20:24.381 [2024-10-09 08:00:26.332158] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:20:24.381 [2024-10-09 08:00:26.332175] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:24.381 [2024-10-09 08:00:26.332185] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:24.381 [2024-10-09 08:00:26.332196] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:24.381 [2024-10-09 08:00:26.332219] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:24.381 [2024-10-09 08:00:26.332231] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:24.381 [2024-10-09 08:00:26.332243] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:24.381 [2024-10-09 08:00:26.332253] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:24.381 [2024-10-09 08:00:26.332264] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:24.381 [2024-10-09 08:00:26.332274] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:24.381 [2024-10-09 08:00:26.332285] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:24.381 [2024-10-09 08:00:26.332295] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:24.381 [2024-10-09 08:00:26.332307] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:24.381 [2024-10-09 08:00:26.332321] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:24.381 [2024-10-09 08:00:26.332352] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:20:24.381 [2024-10-09 08:00:26.332365] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:20:24.381 [2024-10-09 08:00:26.332376] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:20:24.381 [2024-10-09 08:00:26.332388] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:20:24.381 [2024-10-09 08:00:26.332399] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:20:24.381 [2024-10-09 08:00:26.332410] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:20:24.381 [2024-10-09 08:00:26.332422] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:20:24.381 [2024-10-09 08:00:26.332433] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:20:24.381 [2024-10-09 08:00:26.332444] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:20:24.381 [2024-10-09 08:00:26.332455] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:20:24.381 [2024-10-09 08:00:26.332467] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] 
Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:20:24.381 [2024-10-09 08:00:26.332479] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:20:24.381 [2024-10-09 08:00:26.332490] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:20:24.381 [2024-10-09 08:00:26.332502] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:20:24.381 [2024-10-09 08:00:26.332513] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:24.381 [2024-10-09 08:00:26.332525] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:24.381 [2024-10-09 08:00:26.332538] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:24.381 [2024-10-09 08:00:26.332549] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:24.381 [2024-10-09 08:00:26.332560] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:24.381 [2024-10-09 08:00:26.332571] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:24.381 [2024-10-09 08:00:26.332584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.381 [2024-10-09 08:00:26.332595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:24.381 [2024-10-09 08:00:26.332607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.992 ms 00:20:24.381 [2024-10-09 08:00:26.332618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.381 [2024-10-09 08:00:26.373640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.381 [2024-10-09 08:00:26.373715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:24.381 [2024-10-09 08:00:26.373740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.938 ms 00:20:24.381 [2024-10-09 08:00:26.373753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.381 [2024-10-09 08:00:26.373884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.381 [2024-10-09 08:00:26.373901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:24.381 [2024-10-09 08:00:26.373914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:20:24.381 [2024-10-09 08:00:26.373925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.640 [2024-10-09 08:00:26.422314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.640 [2024-10-09 08:00:26.422433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:24.640 [2024-10-09 08:00:26.422484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.286 ms 00:20:24.640 [2024-10-09 08:00:26.422505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.640 [2024-10-09 08:00:26.422615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.640 [2024-10-09 
08:00:26.422640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:24.640 [2024-10-09 08:00:26.422661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:24.640 [2024-10-09 08:00:26.422678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.640 [2024-10-09 08:00:26.423171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.640 [2024-10-09 08:00:26.423209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:24.640 [2024-10-09 08:00:26.423254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.343 ms 00:20:24.640 [2024-10-09 08:00:26.423298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.640 [2024-10-09 08:00:26.423560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.640 [2024-10-09 08:00:26.423678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:24.640 [2024-10-09 08:00:26.423709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.188 ms 00:20:24.640 [2024-10-09 08:00:26.423728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.640 [2024-10-09 08:00:26.449299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.640 [2024-10-09 08:00:26.449397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:24.640 [2024-10-09 08:00:26.449430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.523 ms 00:20:24.640 [2024-10-09 08:00:26.449452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.640 [2024-10-09 08:00:26.474434] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:20:24.640 [2024-10-09 08:00:26.474509] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:24.640 [2024-10-09 08:00:26.474541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.640 [2024-10-09 08:00:26.474561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:24.640 [2024-10-09 08:00:26.474584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.830 ms 00:20:24.640 [2024-10-09 08:00:26.474602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.640 [2024-10-09 08:00:26.510194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.640 [2024-10-09 08:00:26.510428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:24.641 [2024-10-09 08:00:26.510461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.514 ms 00:20:24.641 [2024-10-09 08:00:26.510474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.641 [2024-10-09 08:00:26.526556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.641 [2024-10-09 08:00:26.526604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:24.641 [2024-10-09 08:00:26.526623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.980 ms 00:20:24.641 [2024-10-09 08:00:26.526635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.641 [2024-10-09 08:00:26.542637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.641 [2024-10-09 08:00:26.542709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Restore trim metadata 00:20:24.641 [2024-10-09 08:00:26.542729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.949 ms 00:20:24.641 [2024-10-09 08:00:26.542741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.641 [2024-10-09 08:00:26.543650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.641 [2024-10-09 08:00:26.543690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:24.641 [2024-10-09 08:00:26.543707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.755 ms 00:20:24.641 [2024-10-09 08:00:26.543719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.641 [2024-10-09 08:00:26.621984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.641 [2024-10-09 08:00:26.622212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:24.641 [2024-10-09 08:00:26.622244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 78.227 ms 00:20:24.641 [2024-10-09 08:00:26.622258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.641 [2024-10-09 08:00:26.635287] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:20:24.641 [2024-10-09 08:00:26.637963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.641 [2024-10-09 08:00:26.638002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:24.641 [2024-10-09 08:00:26.638021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.608 ms 00:20:24.641 [2024-10-09 08:00:26.638034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.641 [2024-10-09 08:00:26.638174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.641 [2024-10-09 08:00:26.638196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:24.641 [2024-10-09 08:00:26.638210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:20:24.641 [2024-10-09 08:00:26.638221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.641 [2024-10-09 08:00:26.638359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.641 [2024-10-09 08:00:26.638381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:24.641 [2024-10-09 08:00:26.638394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:20:24.641 [2024-10-09 08:00:26.638405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.641 [2024-10-09 08:00:26.638440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.641 [2024-10-09 08:00:26.638461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:24.641 [2024-10-09 08:00:26.638473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:20:24.641 [2024-10-09 08:00:26.638484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.641 [2024-10-09 08:00:26.638527] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:24.641 [2024-10-09 08:00:26.638559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.641 [2024-10-09 08:00:26.638571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:24.641 [2024-10-09 08:00:26.638583] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:20:24.641 [2024-10-09 08:00:26.638595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.900 [2024-10-09 08:00:26.671851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.900 [2024-10-09 08:00:26.671936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:24.900 [2024-10-09 08:00:26.671959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.221 ms 00:20:24.900 [2024-10-09 08:00:26.671971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.900 [2024-10-09 08:00:26.672081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.900 [2024-10-09 08:00:26.672102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:24.900 [2024-10-09 08:00:26.672115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:20:24.900 [2024-10-09 08:00:26.672127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.900 [2024-10-09 08:00:26.673474] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 371.384 ms, result 0 00:20:25.835  [2024-10-09T08:00:28.790Z] Copying: 27/1024 [MB] (27 MBps) [2024-10-09T08:00:29.729Z] Copying: 54/1024 [MB] (26 MBps) [2024-10-09T08:00:31.102Z] Copying: 82/1024 [MB] (27 MBps) [2024-10-09T08:00:32.038Z] Copying: 109/1024 [MB] (27 MBps) [2024-10-09T08:00:32.974Z] Copying: 137/1024 [MB] (28 MBps) [2024-10-09T08:00:33.910Z] Copying: 165/1024 [MB] (27 MBps) [2024-10-09T08:00:34.846Z] Copying: 193/1024 [MB] (27 MBps) [2024-10-09T08:00:35.781Z] Copying: 217/1024 [MB] (24 MBps) [2024-10-09T08:00:36.718Z] Copying: 245/1024 [MB] (28 MBps) [2024-10-09T08:00:38.092Z] Copying: 274/1024 [MB] (28 MBps) [2024-10-09T08:00:39.038Z] Copying: 301/1024 [MB] (27 MBps) [2024-10-09T08:00:39.971Z] Copying: 329/1024 [MB] (27 MBps) [2024-10-09T08:00:40.907Z] Copying: 356/1024 [MB] (27 MBps) [2024-10-09T08:00:41.842Z] Copying: 384/1024 [MB] (27 MBps) [2024-10-09T08:00:42.776Z] Copying: 411/1024 [MB] (27 MBps) [2024-10-09T08:00:43.711Z] Copying: 436/1024 [MB] (25 MBps) [2024-10-09T08:00:45.115Z] Copying: 464/1024 [MB] (27 MBps) [2024-10-09T08:00:46.050Z] Copying: 493/1024 [MB] (28 MBps) [2024-10-09T08:00:46.987Z] Copying: 520/1024 [MB] (27 MBps) [2024-10-09T08:00:47.922Z] Copying: 549/1024 [MB] (28 MBps) [2024-10-09T08:00:48.857Z] Copying: 576/1024 [MB] (26 MBps) [2024-10-09T08:00:49.792Z] Copying: 602/1024 [MB] (26 MBps) [2024-10-09T08:00:50.728Z] Copying: 630/1024 [MB] (27 MBps) [2024-10-09T08:00:52.105Z] Copying: 658/1024 [MB] (27 MBps) [2024-10-09T08:00:53.041Z] Copying: 684/1024 [MB] (26 MBps) [2024-10-09T08:00:53.976Z] Copying: 711/1024 [MB] (27 MBps) [2024-10-09T08:00:54.908Z] Copying: 739/1024 [MB] (27 MBps) [2024-10-09T08:00:55.843Z] Copying: 768/1024 [MB] (28 MBps) [2024-10-09T08:00:56.780Z] Copying: 792/1024 [MB] (24 MBps) [2024-10-09T08:00:57.736Z] Copying: 819/1024 [MB] (27 MBps) [2024-10-09T08:00:59.115Z] Copying: 847/1024 [MB] (27 MBps) [2024-10-09T08:01:00.049Z] Copying: 873/1024 [MB] (26 MBps) [2024-10-09T08:01:00.983Z] Copying: 900/1024 [MB] (26 MBps) [2024-10-09T08:01:01.917Z] Copying: 929/1024 [MB] (28 MBps) [2024-10-09T08:01:02.858Z] Copying: 957/1024 [MB] (28 MBps) [2024-10-09T08:01:03.792Z] Copying: 986/1024 [MB] (28 MBps) [2024-10-09T08:01:04.050Z] Copying: 1013/1024 [MB] (27 MBps) [2024-10-09T08:01:04.050Z] Copying: 1024/1024 [MB] (average 27 MBps)[2024-10-09 
08:01:04.046572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.038 [2024-10-09 08:01:04.046637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:02.038 [2024-10-09 08:01:04.046659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:02.038 [2024-10-09 08:01:04.046671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.038 [2024-10-09 08:01:04.046716] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:02.297 [2024-10-09 08:01:04.050185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.297 [2024-10-09 08:01:04.050227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:02.297 [2024-10-09 08:01:04.050245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.445 ms 00:21:02.297 [2024-10-09 08:01:04.050257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.297 [2024-10-09 08:01:04.051781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.297 [2024-10-09 08:01:04.051832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:02.297 [2024-10-09 08:01:04.051859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.492 ms 00:21:02.297 [2024-10-09 08:01:04.051878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.297 [2024-10-09 08:01:04.067793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.297 [2024-10-09 08:01:04.067860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:02.297 [2024-10-09 08:01:04.067878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.883 ms 00:21:02.297 [2024-10-09 08:01:04.067890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.297 [2024-10-09 08:01:04.074598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.297 [2024-10-09 08:01:04.074634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:02.297 [2024-10-09 08:01:04.074650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.666 ms 00:21:02.297 [2024-10-09 08:01:04.074661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.297 [2024-10-09 08:01:04.106003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.297 [2024-10-09 08:01:04.106057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:02.297 [2024-10-09 08:01:04.106075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.259 ms 00:21:02.297 [2024-10-09 08:01:04.106087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.297 [2024-10-09 08:01:04.123784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.297 [2024-10-09 08:01:04.123970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:02.297 [2024-10-09 08:01:04.124017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.649 ms 00:21:02.297 [2024-10-09 08:01:04.124031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.297 [2024-10-09 08:01:04.124209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.297 [2024-10-09 08:01:04.124231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:02.297 [2024-10-09 08:01:04.124245] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.105 ms 00:21:02.297 [2024-10-09 08:01:04.124256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.298 [2024-10-09 08:01:04.155774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.298 [2024-10-09 08:01:04.155821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:21:02.298 [2024-10-09 08:01:04.155839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.498 ms 00:21:02.298 [2024-10-09 08:01:04.155851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.298 [2024-10-09 08:01:04.186869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.298 [2024-10-09 08:01:04.187042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:21:02.298 [2024-10-09 08:01:04.187072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.973 ms 00:21:02.298 [2024-10-09 08:01:04.187085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.298 [2024-10-09 08:01:04.218201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.298 [2024-10-09 08:01:04.218414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:02.298 [2024-10-09 08:01:04.218444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.066 ms 00:21:02.298 [2024-10-09 08:01:04.218458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.298 [2024-10-09 08:01:04.249105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.298 [2024-10-09 08:01:04.249276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:02.298 [2024-10-09 08:01:04.249305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.542 ms 00:21:02.298 [2024-10-09 08:01:04.249318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.298 [2024-10-09 08:01:04.249383] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:02.298 [2024-10-09 08:01:04.249410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:21:02.298 [2024-10-09 08:01:04.249424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:02.298 [2024-10-09 08:01:04.249436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:02.298 [2024-10-09 08:01:04.249448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:02.298 [2024-10-09 08:01:04.249460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:02.298 [2024-10-09 08:01:04.249472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:02.298 [2024-10-09 08:01:04.249484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:02.298 [2024-10-09 08:01:04.249495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:02.298 [2024-10-09 08:01:04.249507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:02.298 [2024-10-09 08:01:04.249519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:02.298 
[2024-10-09 08:01:04.249530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:02.298 [2024-10-09 08:01:04.249542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:02.298 [2024-10-09 08:01:04.249553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:02.298 [2024-10-09 08:01:04.249565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:02.298 [2024-10-09 08:01:04.249577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:02.298 [2024-10-09 08:01:04.249589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:02.298 [2024-10-09 08:01:04.249600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:02.298 [2024-10-09 08:01:04.249612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:02.298 [2024-10-09 08:01:04.249623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:02.298 [2024-10-09 08:01:04.249636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:02.298 [2024-10-09 08:01:04.249647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:02.298 [2024-10-09 08:01:04.249659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:02.298 [2024-10-09 08:01:04.249670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:02.298 [2024-10-09 08:01:04.249682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:02.298 [2024-10-09 08:01:04.249694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:02.298 [2024-10-09 08:01:04.249705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:02.298 [2024-10-09 08:01:04.249717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:02.298 [2024-10-09 08:01:04.249730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:02.298 [2024-10-09 08:01:04.249742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:02.298 [2024-10-09 08:01:04.249753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:02.298 [2024-10-09 08:01:04.249765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:02.298 [2024-10-09 08:01:04.249777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:02.298 [2024-10-09 08:01:04.249788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:02.298 [2024-10-09 08:01:04.249800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:02.298 [2024-10-09 08:01:04.249811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 
state: free 00:21:02.298 [2024-10-09 08:01:04.249824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:02.298 [2024-10-09 08:01:04.249836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:02.298 [2024-10-09 08:01:04.249847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:02.298 [2024-10-09 08:01:04.249859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:02.298 [2024-10-09 08:01:04.249870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:02.298 [2024-10-09 08:01:04.249881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:02.298 [2024-10-09 08:01:04.249893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:02.298 [2024-10-09 08:01:04.249905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:02.298 [2024-10-09 08:01:04.249916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:02.298 [2024-10-09 08:01:04.249927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:02.298 [2024-10-09 08:01:04.249939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:02.298 [2024-10-09 08:01:04.249951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:02.298 [2024-10-09 08:01:04.249962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:02.298 [2024-10-09 08:01:04.249973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:02.298 [2024-10-09 08:01:04.249985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:02.298 [2024-10-09 08:01:04.249997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:02.298 [2024-10-09 08:01:04.250008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:02.298 [2024-10-09 08:01:04.250020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:02.298 [2024-10-09 08:01:04.250031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:02.298 [2024-10-09 08:01:04.250043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:02.298 [2024-10-09 08:01:04.250054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:02.298 [2024-10-09 08:01:04.250066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:02.298 [2024-10-09 08:01:04.250078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:02.298 [2024-10-09 08:01:04.250090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:02.298 [2024-10-09 08:01:04.250101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 
0 / 261120 wr_cnt: 0 state: free 00:21:02.299 [2024-10-09 08:01:04.250113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:02.299 [2024-10-09 08:01:04.250125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:02.299 [2024-10-09 08:01:04.250136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:02.299 [2024-10-09 08:01:04.250148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:02.299 [2024-10-09 08:01:04.250160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:02.299 [2024-10-09 08:01:04.250171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:02.299 [2024-10-09 08:01:04.250183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:02.299 [2024-10-09 08:01:04.250196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:02.299 [2024-10-09 08:01:04.250208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:02.299 [2024-10-09 08:01:04.250220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:02.299 [2024-10-09 08:01:04.250232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:02.299 [2024-10-09 08:01:04.250243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:02.299 [2024-10-09 08:01:04.250255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:02.299 [2024-10-09 08:01:04.250267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:02.299 [2024-10-09 08:01:04.250278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:02.299 [2024-10-09 08:01:04.250290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:02.299 [2024-10-09 08:01:04.250301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:02.299 [2024-10-09 08:01:04.250312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:02.299 [2024-10-09 08:01:04.250324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:02.299 [2024-10-09 08:01:04.250350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:02.299 [2024-10-09 08:01:04.250363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:02.299 [2024-10-09 08:01:04.250374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:02.299 [2024-10-09 08:01:04.250386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:02.299 [2024-10-09 08:01:04.250398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:02.299 [2024-10-09 08:01:04.250409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:02.299 [2024-10-09 08:01:04.250422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:02.299 [2024-10-09 08:01:04.250434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:02.299 [2024-10-09 08:01:04.250445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:02.299 [2024-10-09 08:01:04.250457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:02.299 [2024-10-09 08:01:04.250468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:02.299 [2024-10-09 08:01:04.250480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:02.299 [2024-10-09 08:01:04.250491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:02.299 [2024-10-09 08:01:04.250503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:02.299 [2024-10-09 08:01:04.250514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:02.299 [2024-10-09 08:01:04.250526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:02.299 [2024-10-09 08:01:04.250537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:02.299 [2024-10-09 08:01:04.250549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:02.299 [2024-10-09 08:01:04.250560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:02.299 [2024-10-09 08:01:04.250572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:02.299 [2024-10-09 08:01:04.250585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:02.299 [2024-10-09 08:01:04.250606] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:02.299 [2024-10-09 08:01:04.250618] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 2b8840b3-27ea-4ae9-a311-de7b8f0c5f0b 00:21:02.299 [2024-10-09 08:01:04.250629] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:21:02.299 [2024-10-09 08:01:04.250640] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:21:02.299 [2024-10-09 08:01:04.250651] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:21:02.299 [2024-10-09 08:01:04.250662] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:21:02.299 [2024-10-09 08:01:04.250672] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:02.299 [2024-10-09 08:01:04.250684] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:02.299 [2024-10-09 08:01:04.250694] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:02.299 [2024-10-09 08:01:04.250704] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:02.299 [2024-10-09 08:01:04.250714] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:02.299 [2024-10-09 08:01:04.250726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:21:02.299 [2024-10-09 08:01:04.250752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:02.299 [2024-10-09 08:01:04.250783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.344 ms 00:21:02.299 [2024-10-09 08:01:04.250795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.299 [2024-10-09 08:01:04.267271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.299 [2024-10-09 08:01:04.267312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:02.299 [2024-10-09 08:01:04.267344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.429 ms 00:21:02.299 [2024-10-09 08:01:04.267359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.299 [2024-10-09 08:01:04.267824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.299 [2024-10-09 08:01:04.267855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:02.299 [2024-10-09 08:01:04.267870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.420 ms 00:21:02.299 [2024-10-09 08:01:04.267881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.299 [2024-10-09 08:01:04.304968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:02.299 [2024-10-09 08:01:04.305024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:02.299 [2024-10-09 08:01:04.305042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:02.299 [2024-10-09 08:01:04.305054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.299 [2024-10-09 08:01:04.305142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:02.299 [2024-10-09 08:01:04.305157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:02.299 [2024-10-09 08:01:04.305169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:02.299 [2024-10-09 08:01:04.305182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.299 [2024-10-09 08:01:04.305285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:02.299 [2024-10-09 08:01:04.305306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:02.299 [2024-10-09 08:01:04.305320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:02.299 [2024-10-09 08:01:04.305363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.299 [2024-10-09 08:01:04.305395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:02.299 [2024-10-09 08:01:04.305430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:02.299 [2024-10-09 08:01:04.305451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:02.299 [2024-10-09 08:01:04.305471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.558 [2024-10-09 08:01:04.409321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:02.558 [2024-10-09 08:01:04.409390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:02.558 [2024-10-09 08:01:04.409409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:02.558 [2024-10-09 08:01:04.409421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.558 
[2024-10-09 08:01:04.501354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:02.558 [2024-10-09 08:01:04.501425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:02.558 [2024-10-09 08:01:04.501445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:02.558 [2024-10-09 08:01:04.501457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.558 [2024-10-09 08:01:04.501573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:02.558 [2024-10-09 08:01:04.501594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:02.558 [2024-10-09 08:01:04.501607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:02.558 [2024-10-09 08:01:04.501618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.558 [2024-10-09 08:01:04.501665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:02.558 [2024-10-09 08:01:04.501680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:02.558 [2024-10-09 08:01:04.501699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:02.558 [2024-10-09 08:01:04.501710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.558 [2024-10-09 08:01:04.501831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:02.558 [2024-10-09 08:01:04.501850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:02.558 [2024-10-09 08:01:04.501863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:02.558 [2024-10-09 08:01:04.501874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.558 [2024-10-09 08:01:04.501921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:02.558 [2024-10-09 08:01:04.501938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:02.558 [2024-10-09 08:01:04.501956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:02.558 [2024-10-09 08:01:04.501967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.558 [2024-10-09 08:01:04.502012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:02.558 [2024-10-09 08:01:04.502028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:02.558 [2024-10-09 08:01:04.502040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:02.558 [2024-10-09 08:01:04.502051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.558 [2024-10-09 08:01:04.502101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:02.558 [2024-10-09 08:01:04.502117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:02.558 [2024-10-09 08:01:04.502146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:02.558 [2024-10-09 08:01:04.502157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.558 [2024-10-09 08:01:04.502293] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 455.691 ms, result 0 00:21:03.932 00:21:03.932 00:21:03.932 08:01:05 ftl.ftl_restore -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile 
--json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:21:04.191 [2024-10-09 08:01:05.981068] Starting SPDK v25.01-pre git sha1 1c2942c86 / DPDK 24.03.0 initialization... 00:21:04.191 [2024-10-09 08:01:05.981224] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77559 ] 00:21:04.191 [2024-10-09 08:01:06.145179] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:04.449 [2024-10-09 08:01:06.330449] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:21:04.708 [2024-10-09 08:01:06.644006] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:04.708 [2024-10-09 08:01:06.644080] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:04.968 [2024-10-09 08:01:06.811085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.968 [2024-10-09 08:01:06.811142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:04.968 [2024-10-09 08:01:06.811163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:04.968 [2024-10-09 08:01:06.811176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.968 [2024-10-09 08:01:06.811251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.968 [2024-10-09 08:01:06.811271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:04.968 [2024-10-09 08:01:06.811284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:21:04.968 [2024-10-09 08:01:06.811295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.968 [2024-10-09 08:01:06.811325] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:04.968 [2024-10-09 08:01:06.812273] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:04.968 [2024-10-09 08:01:06.812316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.968 [2024-10-09 08:01:06.812344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:04.968 [2024-10-09 08:01:06.812360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.998 ms 00:21:04.968 [2024-10-09 08:01:06.812372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.968 [2024-10-09 08:01:06.813475] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:21:04.968 [2024-10-09 08:01:06.829716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.968 [2024-10-09 08:01:06.829768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:21:04.968 [2024-10-09 08:01:06.829786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.242 ms 00:21:04.968 [2024-10-09 08:01:06.829798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.968 [2024-10-09 08:01:06.829870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.968 [2024-10-09 08:01:06.829891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:21:04.968 [2024-10-09 08:01:06.829903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:21:04.968 
[2024-10-09 08:01:06.829914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.968 [2024-10-09 08:01:06.834185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.968 [2024-10-09 08:01:06.834229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:04.968 [2024-10-09 08:01:06.834245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.176 ms 00:21:04.968 [2024-10-09 08:01:06.834257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.968 [2024-10-09 08:01:06.834372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.968 [2024-10-09 08:01:06.834394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:04.968 [2024-10-09 08:01:06.834407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:21:04.968 [2024-10-09 08:01:06.834418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.968 [2024-10-09 08:01:06.834485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.968 [2024-10-09 08:01:06.834502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:04.968 [2024-10-09 08:01:06.834515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:21:04.968 [2024-10-09 08:01:06.834526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.968 [2024-10-09 08:01:06.834562] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:04.968 [2024-10-09 08:01:06.838772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.968 [2024-10-09 08:01:06.838811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:04.968 [2024-10-09 08:01:06.838826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.221 ms 00:21:04.968 [2024-10-09 08:01:06.838837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.968 [2024-10-09 08:01:06.838877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.968 [2024-10-09 08:01:06.838891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:04.968 [2024-10-09 08:01:06.838903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:21:04.968 [2024-10-09 08:01:06.838914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.968 [2024-10-09 08:01:06.838968] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:21:04.968 [2024-10-09 08:01:06.838998] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:21:04.968 [2024-10-09 08:01:06.839042] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:21:04.968 [2024-10-09 08:01:06.839063] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:21:04.968 [2024-10-09 08:01:06.839175] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:04.968 [2024-10-09 08:01:06.839190] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:04.968 [2024-10-09 08:01:06.839205] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 
00:21:04.968 [2024-10-09 08:01:06.839224] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:04.968 [2024-10-09 08:01:06.839237] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:04.968 [2024-10-09 08:01:06.839249] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:21:04.968 [2024-10-09 08:01:06.839261] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:04.968 [2024-10-09 08:01:06.839271] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:04.968 [2024-10-09 08:01:06.839282] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:04.968 [2024-10-09 08:01:06.839293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.968 [2024-10-09 08:01:06.839304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:04.968 [2024-10-09 08:01:06.839316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.329 ms 00:21:04.968 [2024-10-09 08:01:06.839328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.968 [2024-10-09 08:01:06.839454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.968 [2024-10-09 08:01:06.839475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:04.968 [2024-10-09 08:01:06.839487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:21:04.968 [2024-10-09 08:01:06.839498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.968 [2024-10-09 08:01:06.839618] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:04.968 [2024-10-09 08:01:06.839649] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:04.968 [2024-10-09 08:01:06.839662] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:04.968 [2024-10-09 08:01:06.839673] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:04.968 [2024-10-09 08:01:06.839684] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:04.968 [2024-10-09 08:01:06.839696] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:04.968 [2024-10-09 08:01:06.839707] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:21:04.968 [2024-10-09 08:01:06.839719] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:04.968 [2024-10-09 08:01:06.839730] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:21:04.968 [2024-10-09 08:01:06.839740] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:04.968 [2024-10-09 08:01:06.839750] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:04.968 [2024-10-09 08:01:06.839761] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:21:04.968 [2024-10-09 08:01:06.839771] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:04.968 [2024-10-09 08:01:06.839794] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:04.968 [2024-10-09 08:01:06.839806] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:21:04.968 [2024-10-09 08:01:06.839816] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:04.968 [2024-10-09 08:01:06.839827] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
nvc_md_mirror 00:21:04.968 [2024-10-09 08:01:06.839838] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:21:04.968 [2024-10-09 08:01:06.839848] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:04.968 [2024-10-09 08:01:06.839858] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:04.968 [2024-10-09 08:01:06.839868] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:21:04.968 [2024-10-09 08:01:06.839879] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:04.968 [2024-10-09 08:01:06.839889] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:04.968 [2024-10-09 08:01:06.839900] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:21:04.968 [2024-10-09 08:01:06.839910] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:04.968 [2024-10-09 08:01:06.839920] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:04.968 [2024-10-09 08:01:06.839930] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:21:04.968 [2024-10-09 08:01:06.839941] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:04.968 [2024-10-09 08:01:06.839951] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:04.968 [2024-10-09 08:01:06.839961] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:21:04.968 [2024-10-09 08:01:06.839971] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:04.968 [2024-10-09 08:01:06.839982] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:04.968 [2024-10-09 08:01:06.839993] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:21:04.969 [2024-10-09 08:01:06.840003] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:04.969 [2024-10-09 08:01:06.840013] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:04.969 [2024-10-09 08:01:06.840024] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:21:04.969 [2024-10-09 08:01:06.840033] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:04.969 [2024-10-09 08:01:06.840044] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:04.969 [2024-10-09 08:01:06.840054] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:21:04.969 [2024-10-09 08:01:06.840064] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:04.969 [2024-10-09 08:01:06.840074] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:04.969 [2024-10-09 08:01:06.840085] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:21:04.969 [2024-10-09 08:01:06.840097] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:04.969 [2024-10-09 08:01:06.840107] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:04.969 [2024-10-09 08:01:06.840119] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:04.969 [2024-10-09 08:01:06.840134] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:04.969 [2024-10-09 08:01:06.840145] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:04.969 [2024-10-09 08:01:06.840157] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:04.969 [2024-10-09 08:01:06.840168] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:04.969 [2024-10-09 08:01:06.840178] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:04.969 [2024-10-09 08:01:06.840189] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:04.969 [2024-10-09 08:01:06.840199] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:04.969 [2024-10-09 08:01:06.840210] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:04.969 [2024-10-09 08:01:06.840222] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:04.969 [2024-10-09 08:01:06.840236] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:04.969 [2024-10-09 08:01:06.840249] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:21:04.969 [2024-10-09 08:01:06.840260] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:21:04.969 [2024-10-09 08:01:06.840271] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:21:04.969 [2024-10-09 08:01:06.840282] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:21:04.969 [2024-10-09 08:01:06.840294] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:21:04.969 [2024-10-09 08:01:06.840305] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:21:04.969 [2024-10-09 08:01:06.840316] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:21:04.969 [2024-10-09 08:01:06.840327] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:21:04.969 [2024-10-09 08:01:06.840725] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:21:04.969 [2024-10-09 08:01:06.840788] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:21:04.969 [2024-10-09 08:01:06.840944] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:21:04.969 [2024-10-09 08:01:06.841006] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:21:04.969 [2024-10-09 08:01:06.841132] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:21:04.969 [2024-10-09 08:01:06.841203] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:21:04.969 [2024-10-09 08:01:06.841265] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:04.969 [2024-10-09 08:01:06.841415] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region 
type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:04.969 [2024-10-09 08:01:06.841475] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:04.969 [2024-10-09 08:01:06.841662] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:04.969 [2024-10-09 08:01:06.841802] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:04.969 [2024-10-09 08:01:06.841932] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:04.969 [2024-10-09 08:01:06.842003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.969 [2024-10-09 08:01:06.842107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:04.969 [2024-10-09 08:01:06.842159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.456 ms 00:21:04.969 [2024-10-09 08:01:06.842198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.969 [2024-10-09 08:01:06.885497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.969 [2024-10-09 08:01:06.885705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:04.969 [2024-10-09 08:01:06.885737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.064 ms 00:21:04.969 [2024-10-09 08:01:06.885751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.969 [2024-10-09 08:01:06.885881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.969 [2024-10-09 08:01:06.885898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:04.969 [2024-10-09 08:01:06.885911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:21:04.969 [2024-10-09 08:01:06.885922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.969 [2024-10-09 08:01:06.926180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.969 [2024-10-09 08:01:06.926238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:04.969 [2024-10-09 08:01:06.926262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.160 ms 00:21:04.969 [2024-10-09 08:01:06.926284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.969 [2024-10-09 08:01:06.926370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.969 [2024-10-09 08:01:06.926389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:04.969 [2024-10-09 08:01:06.926402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:04.969 [2024-10-09 08:01:06.926412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.969 [2024-10-09 08:01:06.926811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.969 [2024-10-09 08:01:06.926837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:04.969 [2024-10-09 08:01:06.926851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.292 ms 00:21:04.969 [2024-10-09 08:01:06.926867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.969 [2024-10-09 08:01:06.927021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:21:04.969 [2024-10-09 08:01:06.927039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:04.969 [2024-10-09 08:01:06.927051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.124 ms 00:21:04.969 [2024-10-09 08:01:06.927063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.969 [2024-10-09 08:01:06.943112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.969 [2024-10-09 08:01:06.943157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:04.969 [2024-10-09 08:01:06.943175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.023 ms 00:21:04.969 [2024-10-09 08:01:06.943187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.969 [2024-10-09 08:01:06.959571] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:21:04.969 [2024-10-09 08:01:06.959617] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:21:04.969 [2024-10-09 08:01:06.959645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.969 [2024-10-09 08:01:06.959658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:21:04.969 [2024-10-09 08:01:06.959671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.294 ms 00:21:04.969 [2024-10-09 08:01:06.959682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.228 [2024-10-09 08:01:06.989505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.228 [2024-10-09 08:01:06.989551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:21:05.228 [2024-10-09 08:01:06.989570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.774 ms 00:21:05.228 [2024-10-09 08:01:06.989582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.228 [2024-10-09 08:01:07.005319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.228 [2024-10-09 08:01:07.005377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:21:05.228 [2024-10-09 08:01:07.005394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.676 ms 00:21:05.228 [2024-10-09 08:01:07.005406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.228 [2024-10-09 08:01:07.020965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.228 [2024-10-09 08:01:07.021135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:21:05.228 [2024-10-09 08:01:07.021164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.497 ms 00:21:05.228 [2024-10-09 08:01:07.021177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.228 [2024-10-09 08:01:07.022038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.228 [2024-10-09 08:01:07.022072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:05.228 [2024-10-09 08:01:07.022087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.711 ms 00:21:05.228 [2024-10-09 08:01:07.022098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.228 [2024-10-09 08:01:07.094693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.228 [2024-10-09 
08:01:07.094764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:21:05.228 [2024-10-09 08:01:07.094786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 72.569 ms 00:21:05.228 [2024-10-09 08:01:07.094798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.229 [2024-10-09 08:01:07.107371] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:21:05.229 [2024-10-09 08:01:07.110039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.229 [2024-10-09 08:01:07.110076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:05.229 [2024-10-09 08:01:07.110093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.159 ms 00:21:05.229 [2024-10-09 08:01:07.110111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.229 [2024-10-09 08:01:07.110223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.229 [2024-10-09 08:01:07.110243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:21:05.229 [2024-10-09 08:01:07.110256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:21:05.229 [2024-10-09 08:01:07.110268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.229 [2024-10-09 08:01:07.110380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.229 [2024-10-09 08:01:07.110401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:05.229 [2024-10-09 08:01:07.110414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:21:05.229 [2024-10-09 08:01:07.110425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.229 [2024-10-09 08:01:07.110464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.229 [2024-10-09 08:01:07.110480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:05.229 [2024-10-09 08:01:07.110492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:21:05.229 [2024-10-09 08:01:07.110502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.229 [2024-10-09 08:01:07.110544] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:21:05.229 [2024-10-09 08:01:07.110560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.229 [2024-10-09 08:01:07.110572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:21:05.229 [2024-10-09 08:01:07.110584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:21:05.229 [2024-10-09 08:01:07.110610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.229 [2024-10-09 08:01:07.141643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.229 [2024-10-09 08:01:07.141810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:05.229 [2024-10-09 08:01:07.141841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.008 ms 00:21:05.229 [2024-10-09 08:01:07.141854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.229 [2024-10-09 08:01:07.141948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.229 [2024-10-09 08:01:07.141968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:05.229 [2024-10-09 
08:01:07.141981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:21:05.229 [2024-10-09 08:01:07.141992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.229 [2024-10-09 08:01:07.143167] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 331.589 ms, result 0 00:21:06.604  [2024-10-09T08:01:47.314Z] Copying: 1024/1024 [MB] (average 26 MBps)[2024-10-09 08:01:46.998104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:45.302 [2024-10-09 08:01:46.998373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:45.302 [2024-10-09 08:01:46.998408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:45.302 [2024-10-09 08:01:46.998422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.302 [2024-10-09 08:01:46.998470] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:45.302 [2024-10-09 08:01:47.001872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:45.302 [2024-10-09 08:01:47.002025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:21:45.302 [2024-10-09 08:01:47.002053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.365 ms 00:21:45.302 [2024-10-09 08:01:47.002066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.302 [2024-10-09 08:01:47.002315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:45.302 [2024-10-09 08:01:47.002354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:45.302 [2024-10-09 08:01:47.002371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.217 ms 00:21:45.302 [2024-10-09 08:01:47.002382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.302 [2024-10-09 08:01:47.006495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:45.302 [2024-10-09 08:01:47.006522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:45.302 [2024-10-09 08:01:47.006535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.085 ms 00:21:45.302 [2024-10-09 08:01:47.006546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.302 [2024-10-09 08:01:47.013962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:45.302 [2024-10-09 08:01:47.013998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:45.302 [2024-10-09 08:01:47.014012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.391 ms 00:21:45.302 [2024-10-09 08:01:47.014040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.302 [2024-10-09 08:01:47.049038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:45.302 [2024-10-09 08:01:47.049301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:45.302 [2024-10-09 08:01:47.049456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.893 ms 00:21:45.302 [2024-10-09 08:01:47.049482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.302 [2024-10-09 08:01:47.067684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:45.302 [2024-10-09 08:01:47.067910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:45.302 [2024-10-09 08:01:47.068035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.090 ms 00:21:45.302 [2024-10-09 08:01:47.068162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.302 [2024-10-09 08:01:47.068392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:45.302 [2024-10-09 08:01:47.068561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:45.302 [2024-10-09 08:01:47.068686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.129 ms 00:21:45.302 [2024-10-09 08:01:47.068737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.302 [2024-10-09 08:01:47.100217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:45.302 [2024-10-09 08:01:47.100408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:21:45.302 [2024-10-09 08:01:47.100437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.361 ms 00:21:45.302 [2024-10-09 08:01:47.100450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.302 [2024-10-09 08:01:47.131692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:45.302 [2024-10-09 08:01:47.131747] mngt/ftl_mngt.c: 
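Each management step above is reported by trace_step() in mngt/ftl_mngt.c as the same four *NOTICE* records: an Action (or Rollback) marker, the step name, its duration, and its status (0 on success), with finish_msg() summarizing the whole process at the end. That regular shape makes a saved console log easy to profile with ordinary shell tools. A rough sketch (the console.log filename is an assumption; point it at wherever this output was saved):

  # Pair each step name with its duration; name and duration records alternate
  # in the log, so 'paste - -' folds the grep matches into two columns.
  grep -oE "name: [A-Za-z0-9 ]*[A-Za-z]|duration: [0-9.]+ ms" console.log | paste - -

Sorting the second column quickly shows where shutdown time goes; in the sequence above the persist steps dominate (e.g. Persist NV cache metadata at 34.893 ms versus Persist P2L metadata at 0.129 ms).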
00:21:45.302 [2024-10-09 08:01:47.131692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:45.302 [2024-10-09 08:01:47.131747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata
00:21:45.302 [2024-10-09 08:01:47.131766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.192 ms
00:21:45.302 [2024-10-09 08:01:47.131777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:45.302 [2024-10-09 08:01:47.168044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:45.302 [2024-10-09 08:01:47.168107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock
00:21:45.302 [2024-10-09 08:01:47.168126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.209 ms
00:21:45.302 [2024-10-09 08:01:47.168149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:45.302 [2024-10-09 08:01:47.199116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:45.302 [2024-10-09 08:01:47.199179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state
00:21:45.302 [2024-10-09 08:01:47.199198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.839 ms
00:21:45.302 [2024-10-09 08:01:47.199210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:45.302 [2024-10-09 08:01:47.199263] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:21:45.302 [2024-10-09 08:01:47.199288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free
[... Band 2 through Band 100 elided: each reports 0 / 261120 wr_cnt: 0 state: free ...]
00:21:45.303 [2024-10-09 08:01:47.200501] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:21:45.303 [2024-10-09 08:01:47.200513] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 2b8840b3-27ea-4ae9-a311-de7b8f0c5f0b
00:21:45.303 [2024-10-09 08:01:47.200525] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:21:45.303 [2024-10-09 08:01:47.200536] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
00:21:45.303 [2024-10-09 08:01:47.200547] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:21:45.303 [2024-10-09 08:01:47.200558] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:21:45.303 [2024-10-09 08:01:47.200568] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:21:45.303 [2024-10-09 08:01:47.200579] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:21:45.303 [2024-10-09 08:01:47.200599] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:21:45.304 [2024-10-09 08:01:47.200610] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:21:45.304 [2024-10-09 08:01:47.200619] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:21:45.304 [2024-10-09 08:01:47.200630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:45.304 [2024-10-09 08:01:47.200653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:21:45.304 [2024-10-09 08:01:47.200666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.368 ms
00:21:45.304 [2024-10-09 08:01:47.200677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:45.304 [2024-10-09 08:01:47.217442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:45.304 [2024-10-09 08:01:47.217506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:21:45.304 [2024-10-09 08:01:47.217526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.713 ms
00:21:45.304 [2024-10-09 08:01:47.217547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
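The dump above is the state the 'FTL shutdown' process persists: all 100 bands report 0 valid blocks out of 261120, and the statistics show 960 total media writes against 0 user writes, which is why WAF (write amplification factor, media writes divided by user writes) prints as inf; the denominator is zero because this instance only wrote metadata. Assuming the FTL's 4 KiB logical block size (consistent with the 4-byte L2P entries and the 80.00 MiB l2p region in the layout dump printed during each startup, see below), the logged geometry can be sanity-checked with shell arithmetic; this is a reader's cross-check, not part of the test:

  echo $(( 261120 * 4096 / 1048576 ))       # one band: 261120 blocks x 4 KiB = 1020 MiB
  echo $(( 20971520 * 4 / 1048576 ))        # L2P table: 20971520 entries x 4 B = 80 MiB
  echo $(( 20971520 * 4096 / 1073741824 ))  # user-visible capacity: 80 GiB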
00:21:45.304 [2024-10-09 08:01:47.218027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:45.304 [2024-10-09 08:01:47.218069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing
00:21:45.304 [2024-10-09 08:01:47.218084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.405 ms
00:21:45.304 [2024-10-09 08:01:47.218095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:45.304 [2024-10-09 08:01:47.255163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:21:45.304 [2024-10-09 08:01:47.255226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:21:45.304 [2024-10-09 08:01:47.255244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:21:45.304 [2024-10-09 08:01:47.255263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:45.304 [2024-10-09 08:01:47.255354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:21:45.304 [2024-10-09 08:01:47.255371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:21:45.304 [2024-10-09 08:01:47.255384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:21:45.304 [2024-10-09 08:01:47.255395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:45.304 [2024-10-09 08:01:47.255482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:21:45.304 [2024-10-09 08:01:47.255501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:21:45.304 [2024-10-09 08:01:47.255513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:21:45.304 [2024-10-09 08:01:47.255525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:45.304 [2024-10-09 08:01:47.255554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:21:45.304 [2024-10-09 08:01:47.255568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:21:45.304 [2024-10-09 08:01:47.255579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:21:45.304 [2024-10-09 08:01:47.255589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:45.563 [2024-10-09 08:01:47.359041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:21:45.563 [2024-10-09 08:01:47.359114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:21:45.563 [2024-10-09 08:01:47.359133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:21:45.563 [2024-10-09 08:01:47.359151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:45.563 [2024-10-09 08:01:47.443398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:21:45.563 [2024-10-09 08:01:47.443466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:21:45.563 [2024-10-09 08:01:47.443485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:21:45.563 [2024-10-09 08:01:47.443497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:45.563 [2024-10-09 08:01:47.443594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:21:45.563 [2024-10-09 08:01:47.443612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:21:45.563 [2024-10-09 08:01:47.443624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:21:45.563 [2024-10-09 08:01:47.443652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:45.563 [2024-10-09 08:01:47.443702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:21:45.563 [2024-10-09 08:01:47.443725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:21:45.563 [2024-10-09 08:01:47.443737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:21:45.563 [2024-10-09 08:01:47.443748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:45.563 [2024-10-09 08:01:47.443872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:21:45.563 [2024-10-09 08:01:47.443893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:21:45.563 [2024-10-09 08:01:47.443905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:21:45.563 [2024-10-09 08:01:47.443917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:45.563 [2024-10-09 08:01:47.443963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:21:45.563 [2024-10-09 08:01:47.443989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock
00:21:45.563 [2024-10-09 08:01:47.444001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:21:45.563 [2024-10-09 08:01:47.444012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:45.563 [2024-10-09 08:01:47.444056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:21:45.563 [2024-10-09 08:01:47.444071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:21:45.563 [2024-10-09 08:01:47.444083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:21:45.563 [2024-10-09 08:01:47.444093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:45.563 [2024-10-09 08:01:47.444143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:21:45.563 [2024-10-09 08:01:47.444165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:21:45.563 [2024-10-09 08:01:47.444178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:21:45.563 [2024-10-09 08:01:47.444188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:45.563 [2024-10-09 08:01:47.444323] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 446.188 ms, result 0
00:21:46.499
00:21:46.499
00:21:46.757 08:01:48 ftl.ftl_restore -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
00:21:48.721 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK
00:21:48.721 08:01:50 ftl.ftl_restore -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072
00:21:48.979 [2024-10-09 08:01:50.803449] Starting SPDK v25.01-pre git sha1 1c2942c86 / DPDK 24.03.0 initialization...
00:21:48.979 [2024-10-09 08:01:50.803821] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78009 ] 00:21:48.979 [2024-10-09 08:01:50.966995] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:49.237 [2024-10-09 08:01:51.193925] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:21:49.496 [2024-10-09 08:01:51.506542] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:49.496 [2024-10-09 08:01:51.506623] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:49.756 [2024-10-09 08:01:51.668219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:49.756 [2024-10-09 08:01:51.668280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:49.756 [2024-10-09 08:01:51.668302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:49.756 [2024-10-09 08:01:51.668315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.756 [2024-10-09 08:01:51.668408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:49.756 [2024-10-09 08:01:51.668429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:49.756 [2024-10-09 08:01:51.668443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:21:49.756 [2024-10-09 08:01:51.668454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.756 [2024-10-09 08:01:51.668488] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:49.756 [2024-10-09 08:01:51.669450] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:49.756 [2024-10-09 08:01:51.669490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:49.756 [2024-10-09 08:01:51.669507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:49.756 [2024-10-09 08:01:51.669525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.010 ms 00:21:49.756 [2024-10-09 08:01:51.669537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.756 [2024-10-09 08:01:51.670759] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:21:49.756 [2024-10-09 08:01:51.687115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:49.756 [2024-10-09 08:01:51.687169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:21:49.756 [2024-10-09 08:01:51.687189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.358 ms 00:21:49.756 [2024-10-09 08:01:51.687201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.756 [2024-10-09 08:01:51.687284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:49.756 [2024-10-09 08:01:51.687306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:21:49.756 [2024-10-09 08:01:51.687320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:21:49.756 [2024-10-09 08:01:51.687352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.756 [2024-10-09 08:01:51.691861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:21:49.756 [2024-10-09 08:01:51.691913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:49.756 [2024-10-09 08:01:51.691930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.395 ms 00:21:49.756 [2024-10-09 08:01:51.691942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.757 [2024-10-09 08:01:51.692047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:49.757 [2024-10-09 08:01:51.692068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:49.757 [2024-10-09 08:01:51.692081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:21:49.757 [2024-10-09 08:01:51.692092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.757 [2024-10-09 08:01:51.692168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:49.757 [2024-10-09 08:01:51.692194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:49.757 [2024-10-09 08:01:51.692207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:21:49.757 [2024-10-09 08:01:51.692218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.757 [2024-10-09 08:01:51.692254] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:49.757 [2024-10-09 08:01:51.696625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:49.757 [2024-10-09 08:01:51.696668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:49.757 [2024-10-09 08:01:51.696684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.380 ms 00:21:49.757 [2024-10-09 08:01:51.696696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.757 [2024-10-09 08:01:51.696736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:49.757 [2024-10-09 08:01:51.696753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:49.757 [2024-10-09 08:01:51.696766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:21:49.757 [2024-10-09 08:01:51.696777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.757 [2024-10-09 08:01:51.696832] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:21:49.757 [2024-10-09 08:01:51.696865] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:21:49.757 [2024-10-09 08:01:51.696909] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:21:49.757 [2024-10-09 08:01:51.696929] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:21:49.757 [2024-10-09 08:01:51.697043] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:49.757 [2024-10-09 08:01:51.697058] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:49.757 [2024-10-09 08:01:51.697073] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:21:49.757 [2024-10-09 08:01:51.697093] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:49.757 [2024-10-09 08:01:51.697106] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:49.757 [2024-10-09 08:01:51.697118] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:21:49.757 [2024-10-09 08:01:51.697129] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:49.757 [2024-10-09 08:01:51.697140] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:49.757 [2024-10-09 08:01:51.697150] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:49.757 [2024-10-09 08:01:51.697162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:49.757 [2024-10-09 08:01:51.697174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:49.757 [2024-10-09 08:01:51.697185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.333 ms 00:21:49.757 [2024-10-09 08:01:51.697196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.757 [2024-10-09 08:01:51.697302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:49.757 [2024-10-09 08:01:51.697324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:49.757 [2024-10-09 08:01:51.697361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:21:49.757 [2024-10-09 08:01:51.697374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.757 [2024-10-09 08:01:51.697534] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:49.757 [2024-10-09 08:01:51.697560] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:49.757 [2024-10-09 08:01:51.697579] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:49.757 [2024-10-09 08:01:51.697600] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:49.757 [2024-10-09 08:01:51.697619] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:49.757 [2024-10-09 08:01:51.697631] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:49.757 [2024-10-09 08:01:51.697642] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:21:49.757 [2024-10-09 08:01:51.697652] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:49.757 [2024-10-09 08:01:51.697663] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:21:49.757 [2024-10-09 08:01:51.697673] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:49.757 [2024-10-09 08:01:51.697688] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:49.757 [2024-10-09 08:01:51.697706] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:21:49.757 [2024-10-09 08:01:51.697718] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:49.757 [2024-10-09 08:01:51.697744] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:49.757 [2024-10-09 08:01:51.697755] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:21:49.757 [2024-10-09 08:01:51.697770] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:49.757 [2024-10-09 08:01:51.697783] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:49.757 [2024-10-09 08:01:51.697796] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:21:49.757 [2024-10-09 08:01:51.697808] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:49.757 [2024-10-09 08:01:51.697834] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:49.757 [2024-10-09 08:01:51.697853] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:21:49.757 [2024-10-09 08:01:51.697869] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:49.757 [2024-10-09 08:01:51.697888] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:49.757 [2024-10-09 08:01:51.697905] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:21:49.757 [2024-10-09 08:01:51.697915] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:49.757 [2024-10-09 08:01:51.697926] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:49.757 [2024-10-09 08:01:51.697936] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:21:49.757 [2024-10-09 08:01:51.697946] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:49.757 [2024-10-09 08:01:51.697956] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:49.757 [2024-10-09 08:01:51.697966] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:21:49.757 [2024-10-09 08:01:51.697976] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:49.757 [2024-10-09 08:01:51.697988] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:49.757 [2024-10-09 08:01:51.698004] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:21:49.757 [2024-10-09 08:01:51.698015] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:49.757 [2024-10-09 08:01:51.698029] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:49.757 [2024-10-09 08:01:51.698048] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:21:49.757 [2024-10-09 08:01:51.698060] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:49.757 [2024-10-09 08:01:51.698070] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:49.757 [2024-10-09 08:01:51.698081] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:21:49.757 [2024-10-09 08:01:51.698096] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:49.757 [2024-10-09 08:01:51.698107] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:49.757 [2024-10-09 08:01:51.698117] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:21:49.757 [2024-10-09 08:01:51.698127] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:49.757 [2024-10-09 08:01:51.698140] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:49.757 [2024-10-09 08:01:51.698160] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:49.757 [2024-10-09 08:01:51.698183] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:49.757 [2024-10-09 08:01:51.698194] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:49.757 [2024-10-09 08:01:51.698206] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:49.757 [2024-10-09 08:01:51.698222] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:49.757 [2024-10-09 08:01:51.698242] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:49.757 
[2024-10-09 08:01:51.698255] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:49.757 [2024-10-09 08:01:51.698273] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:49.757 [2024-10-09 08:01:51.698289] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:49.757 [2024-10-09 08:01:51.698302] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:49.757 [2024-10-09 08:01:51.698316] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:49.757 [2024-10-09 08:01:51.698345] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:21:49.757 [2024-10-09 08:01:51.698360] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:21:49.757 [2024-10-09 08:01:51.698372] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:21:49.757 [2024-10-09 08:01:51.698383] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:21:49.757 [2024-10-09 08:01:51.698399] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:21:49.757 [2024-10-09 08:01:51.698413] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:21:49.757 [2024-10-09 08:01:51.698424] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:21:49.757 [2024-10-09 08:01:51.698435] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:21:49.757 [2024-10-09 08:01:51.698448] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:21:49.758 [2024-10-09 08:01:51.698469] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:21:49.758 [2024-10-09 08:01:51.698488] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:21:49.758 [2024-10-09 08:01:51.698502] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:21:49.758 [2024-10-09 08:01:51.698513] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:21:49.758 [2024-10-09 08:01:51.698524] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:21:49.758 [2024-10-09 08:01:51.698539] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:49.758 [2024-10-09 08:01:51.698559] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:49.758 [2024-10-09 08:01:51.698580] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:21:49.758 [2024-10-09 08:01:51.698595] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:49.758 [2024-10-09 08:01:51.698609] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:49.758 [2024-10-09 08:01:51.698629] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:49.758 [2024-10-09 08:01:51.698645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:49.758 [2024-10-09 08:01:51.698659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:49.758 [2024-10-09 08:01:51.698675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.188 ms 00:21:49.758 [2024-10-09 08:01:51.698687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.758 [2024-10-09 08:01:51.739772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:49.758 [2024-10-09 08:01:51.739832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:49.758 [2024-10-09 08:01:51.739854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.000 ms 00:21:49.758 [2024-10-09 08:01:51.739866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.758 [2024-10-09 08:01:51.739997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:49.758 [2024-10-09 08:01:51.740015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:49.758 [2024-10-09 08:01:51.740028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:21:49.758 [2024-10-09 08:01:51.740040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.016 [2024-10-09 08:01:51.780550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:50.016 [2024-10-09 08:01:51.780613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:50.016 [2024-10-09 08:01:51.780640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.403 ms 00:21:50.016 [2024-10-09 08:01:51.780653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.016 [2024-10-09 08:01:51.780732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:50.016 [2024-10-09 08:01:51.780750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:50.016 [2024-10-09 08:01:51.780763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:50.016 [2024-10-09 08:01:51.780774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.016 [2024-10-09 08:01:51.781193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:50.016 [2024-10-09 08:01:51.781214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:50.016 [2024-10-09 08:01:51.781227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.310 ms 00:21:50.016 [2024-10-09 08:01:51.781245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.016 [2024-10-09 08:01:51.781429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:50.016 [2024-10-09 08:01:51.781452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:50.016 [2024-10-09 08:01:51.781465] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.152 ms 00:21:50.016 [2024-10-09 08:01:51.781476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.016 [2024-10-09 08:01:51.797695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:50.016 [2024-10-09 08:01:51.797748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:50.016 [2024-10-09 08:01:51.797767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.188 ms 00:21:50.016 [2024-10-09 08:01:51.797779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.016 [2024-10-09 08:01:51.814410] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:21:50.016 [2024-10-09 08:01:51.814612] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:21:50.016 [2024-10-09 08:01:51.814638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:50.016 [2024-10-09 08:01:51.814651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:21:50.016 [2024-10-09 08:01:51.814667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.708 ms 00:21:50.016 [2024-10-09 08:01:51.814688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.016 [2024-10-09 08:01:51.844539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:50.016 [2024-10-09 08:01:51.844589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:21:50.016 [2024-10-09 08:01:51.844608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.797 ms 00:21:50.016 [2024-10-09 08:01:51.844620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.016 [2024-10-09 08:01:51.860556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:50.016 [2024-10-09 08:01:51.860605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:21:50.016 [2024-10-09 08:01:51.860624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.872 ms 00:21:50.016 [2024-10-09 08:01:51.860636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.016 [2024-10-09 08:01:51.877625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:50.016 [2024-10-09 08:01:51.877845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:21:50.016 [2024-10-09 08:01:51.877891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.918 ms 00:21:50.016 [2024-10-09 08:01:51.877913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.016 [2024-10-09 08:01:51.879033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:50.016 [2024-10-09 08:01:51.879079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:50.016 [2024-10-09 08:01:51.879097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.918 ms 00:21:50.016 [2024-10-09 08:01:51.879109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.016 [2024-10-09 08:01:51.952821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:50.016 [2024-10-09 08:01:51.953063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:21:50.016 [2024-10-09 08:01:51.953096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 73.673 ms 00:21:50.016 [2024-10-09 08:01:51.953110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.017 [2024-10-09 08:01:51.965819] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:21:50.017 [2024-10-09 08:01:51.968454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:50.017 [2024-10-09 08:01:51.968495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:50.017 [2024-10-09 08:01:51.968513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.263 ms 00:21:50.017 [2024-10-09 08:01:51.968532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.017 [2024-10-09 08:01:51.968656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:50.017 [2024-10-09 08:01:51.968679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:21:50.017 [2024-10-09 08:01:51.968692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:21:50.017 [2024-10-09 08:01:51.968704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.017 [2024-10-09 08:01:51.968804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:50.017 [2024-10-09 08:01:51.968825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:50.017 [2024-10-09 08:01:51.968838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:21:50.017 [2024-10-09 08:01:51.968848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.017 [2024-10-09 08:01:51.968890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:50.017 [2024-10-09 08:01:51.968908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:50.017 [2024-10-09 08:01:51.968921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:21:50.017 [2024-10-09 08:01:51.968931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.017 [2024-10-09 08:01:51.968975] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:21:50.017 [2024-10-09 08:01:51.968994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:50.017 [2024-10-09 08:01:51.969007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:21:50.017 [2024-10-09 08:01:51.969019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:21:50.017 [2024-10-09 08:01:51.969034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.017 [2024-10-09 08:01:52.000455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:50.017 [2024-10-09 08:01:52.000508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:50.017 [2024-10-09 08:01:52.000527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.394 ms 00:21:50.017 [2024-10-09 08:01:52.000539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.017 [2024-10-09 08:01:52.000632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:50.017 [2024-10-09 08:01:52.000653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:50.017 [2024-10-09 08:01:52.000666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:21:50.017 [2024-10-09 08:01:52.000678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
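Startup is the mirror image of the shutdown sequences above: the superblock is loaded and validated, the persisted NV cache, valid map, band and trim metadata are restored, P2L checkpoints are replayed, and the freshly opened device is immediately marked dirty ('Set FTL dirty state') so that a crash before the next clean shutdown would be detected on the following load. The layout dump in the middle of the sequence is internally consistent: the SB metadata table gives region sizes in blocks, and with the assumed 4 KiB block size they reproduce the MiB figures printed alongside. A reader's cross-check, not part of the test:

  printf '%d MiB\n' $(( 0x5000 * 4096 / 1048576 ))  # l2p region, blk_sz:0x5000 -> 80 MiB
  printf '%d MiB\n' $(( 0x800 * 4096 / 1048576 ))   # each p2l region, blk_sz:0x800 -> 8 MiB
  echo "scale=2; 32 * 4096 / 1048576" | bc          # sb region, blk_sz:0x20 (32 blocks) -> .12 MiB

These match the 80.00 MiB, 8.00 MiB and 0.12 MiB entries in the layout dump above.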
00:21:50.017 [2024-10-09 08:01:52.001935] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 333.103 ms, result 0
00:21:51.392  [2024-10-09T08:01:54.339Z] Copying: 27/1024 [MB] (27 MBps) [... 35 intermediate progress updates (56/1024 MB through 1015/1024 MB) elided; all between 26 and 30 MBps ...] [2024-10-09T08:02:29.619Z] Copying: 1048232/1048576 [kB] (8152 kBps) [2024-10-09T08:02:29.619Z] Copying: 1024/1024 [MB] (average 27 MBps)
[2024-10-09 08:02:29.443392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:27.607 [2024-10-09 08:02:29.443700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:22:27.607 [2024-10-09 08:02:29.443871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms
00:22:27.607 [2024-10-09 08:02:29.443934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:27.607 [2024-10-09 08:02:29.446931] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:22:27.607 [2024-10-09 08:02:29.452535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:27.607 [2024-10-09 08:02:29.452587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:22:27.607 [2024-10-09 08:02:29.452611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.392 ms
00:22:27.607 [2024-10-09 08:02:29.452635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:27.607 [2024-10-09 
08:02:29.468879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.607 [2024-10-09 08:02:29.468950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:27.607 [2024-10-09 08:02:29.468974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.443 ms 00:22:27.607 [2024-10-09 08:02:29.468989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.607 [2024-10-09 08:02:29.493984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.607 [2024-10-09 08:02:29.494315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:27.607 [2024-10-09 08:02:29.494368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.959 ms 00:22:27.607 [2024-10-09 08:02:29.494385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.607 [2024-10-09 08:02:29.502708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.607 [2024-10-09 08:02:29.502785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:27.607 [2024-10-09 08:02:29.502807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.229 ms 00:22:27.607 [2024-10-09 08:02:29.502820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.607 [2024-10-09 08:02:29.542731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.607 [2024-10-09 08:02:29.542818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:27.607 [2024-10-09 08:02:29.542843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.776 ms 00:22:27.607 [2024-10-09 08:02:29.542857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.607 [2024-10-09 08:02:29.564444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.607 [2024-10-09 08:02:29.564814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:27.607 [2024-10-09 08:02:29.564853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.492 ms 00:22:27.607 [2024-10-09 08:02:29.564871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.867 [2024-10-09 08:02:29.656154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.867 [2024-10-09 08:02:29.656450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:27.867 [2024-10-09 08:02:29.656499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 91.175 ms 00:22:27.867 [2024-10-09 08:02:29.656513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.867 [2024-10-09 08:02:29.688821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.867 [2024-10-09 08:02:29.688876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:22:27.867 [2024-10-09 08:02:29.688895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.269 ms 00:22:27.867 [2024-10-09 08:02:29.688907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.867 [2024-10-09 08:02:29.720194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.867 [2024-10-09 08:02:29.720251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:22:27.867 [2024-10-09 08:02:29.720271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.237 ms 00:22:27.867 [2024-10-09 08:02:29.720283] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.867 [2024-10-09 08:02:29.751419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.867 [2024-10-09 08:02:29.751483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:27.867 [2024-10-09 08:02:29.751507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.086 ms 00:22:27.867 [2024-10-09 08:02:29.751529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.867 [2024-10-09 08:02:29.782765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.867 [2024-10-09 08:02:29.782825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:27.867 [2024-10-09 08:02:29.782844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.073 ms 00:22:27.867 [2024-10-09 08:02:29.782855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.867 [2024-10-09 08:02:29.782906] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:27.867 [2024-10-09 08:02:29.782932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 118528 / 261120 wr_cnt: 1 state: open 00:22:27.867 [2024-10-09 08:02:29.782947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:27.867 [2024-10-09 08:02:29.782959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:27.867 [2024-10-09 08:02:29.782971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:27.867 [2024-10-09 08:02:29.782983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:27.867 [2024-10-09 08:02:29.782995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:27.867 [2024-10-09 08:02:29.783006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:27.867 [2024-10-09 08:02:29.783017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:27.867 [2024-10-09 08:02:29.783029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:27.867 [2024-10-09 08:02:29.783041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:27.867 [2024-10-09 08:02:29.783053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:27.867 [2024-10-09 08:02:29.783064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:27.867 [2024-10-09 08:02:29.783075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:27.867 [2024-10-09 08:02:29.783087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:27.867 [2024-10-09 08:02:29.783099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:27.867 [2024-10-09 08:02:29.783110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:27.867 [2024-10-09 08:02:29.783122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:27.867 [2024-10-09 08:02:29.783133] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:27.867 [2024-10-09 08:02:29.783144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:27.867 [2024-10-09 08:02:29.783156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:27.867 [2024-10-09 08:02:29.783168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:27.867 [2024-10-09 08:02:29.783179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:27.867 [2024-10-09 08:02:29.783190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:27.867 [2024-10-09 08:02:29.783202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:27.867 [2024-10-09 08:02:29.783213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:27.867 [2024-10-09 08:02:29.783225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:27.868 [2024-10-09 08:02:29.783239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:27.868 [2024-10-09 08:02:29.783250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:27.868 [2024-10-09 08:02:29.783262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:22:27.868 [2024-10-09 08:02:29.783274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:27.868 [2024-10-09 08:02:29.783286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:27.868 [2024-10-09 08:02:29.783297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:27.868 [2024-10-09 08:02:29.783310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:27.868 [2024-10-09 08:02:29.783321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:27.868 [2024-10-09 08:02:29.783353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:22:27.868 [2024-10-09 08:02:29.783369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:27.868 [2024-10-09 08:02:29.783380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:27.868 [2024-10-09 08:02:29.783391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:27.868 [2024-10-09 08:02:29.783403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:27.868 [2024-10-09 08:02:29.783414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:27.868 [2024-10-09 08:02:29.783426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:27.868 [2024-10-09 08:02:29.783437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:27.868 [2024-10-09 
08:02:29.783448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:27.868 [2024-10-09 08:02:29.783460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:27.868 [2024-10-09 08:02:29.783472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:27.868 [2024-10-09 08:02:29.783486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:27.868 [2024-10-09 08:02:29.783497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:22:27.868 [2024-10-09 08:02:29.783508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:22:27.868 [2024-10-09 08:02:29.783520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:27.868 [2024-10-09 08:02:29.783531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:27.868 [2024-10-09 08:02:29.783542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:27.868 [2024-10-09 08:02:29.783554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:27.868 [2024-10-09 08:02:29.783565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:27.868 [2024-10-09 08:02:29.783576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:27.868 [2024-10-09 08:02:29.783587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:27.868 [2024-10-09 08:02:29.783599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:27.868 [2024-10-09 08:02:29.783610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:22:27.868 [2024-10-09 08:02:29.783621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:27.868 [2024-10-09 08:02:29.783632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:22:27.868 [2024-10-09 08:02:29.783652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:22:27.868 [2024-10-09 08:02:29.783665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:27.868 [2024-10-09 08:02:29.783677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:27.868 [2024-10-09 08:02:29.783688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:27.868 [2024-10-09 08:02:29.783700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:22:27.868 [2024-10-09 08:02:29.783713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:27.868 [2024-10-09 08:02:29.783724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:27.868 [2024-10-09 08:02:29.783736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 
00:22:27.868 [2024-10-09 08:02:29.783747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:27.868 [2024-10-09 08:02:29.783759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:27.868 [2024-10-09 08:02:29.783770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:27.868 [2024-10-09 08:02:29.783781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:27.868 [2024-10-09 08:02:29.783793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:22:27.868 [2024-10-09 08:02:29.783804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:22:27.868 [2024-10-09 08:02:29.783816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:22:27.868 [2024-10-09 08:02:29.783827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:22:27.868 [2024-10-09 08:02:29.783838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:22:27.868 [2024-10-09 08:02:29.783850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:27.868 [2024-10-09 08:02:29.783861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:27.868 [2024-10-09 08:02:29.783872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:27.868 [2024-10-09 08:02:29.783884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:27.868 [2024-10-09 08:02:29.783904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:27.868 [2024-10-09 08:02:29.783916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:22:27.868 [2024-10-09 08:02:29.783927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:27.868 [2024-10-09 08:02:29.783938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:22:27.868 [2024-10-09 08:02:29.783949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:27.868 [2024-10-09 08:02:29.783961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:27.868 [2024-10-09 08:02:29.783972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:27.868 [2024-10-09 08:02:29.783983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:27.868 [2024-10-09 08:02:29.783995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:27.868 [2024-10-09 08:02:29.784006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:27.868 [2024-10-09 08:02:29.784017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:27.868 [2024-10-09 08:02:29.784028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 
wr_cnt: 0 state: free 00:22:27.868 [2024-10-09 08:02:29.784040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:27.868 [2024-10-09 08:02:29.784052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:27.868 [2024-10-09 08:02:29.784063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:27.868 [2024-10-09 08:02:29.784075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:27.868 [2024-10-09 08:02:29.784087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:27.868 [2024-10-09 08:02:29.784099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:27.868 [2024-10-09 08:02:29.784110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:27.868 [2024-10-09 08:02:29.784122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:27.868 [2024-10-09 08:02:29.784142] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:27.868 [2024-10-09 08:02:29.784154] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 2b8840b3-27ea-4ae9-a311-de7b8f0c5f0b 00:22:27.868 [2024-10-09 08:02:29.784174] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 118528 00:22:27.868 [2024-10-09 08:02:29.784184] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 119488 00:22:27.868 [2024-10-09 08:02:29.784195] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 118528 00:22:27.868 [2024-10-09 08:02:29.784206] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0081 00:22:27.868 [2024-10-09 08:02:29.784217] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:27.868 [2024-10-09 08:02:29.784228] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:27.868 [2024-10-09 08:02:29.784239] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:27.868 [2024-10-09 08:02:29.784249] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:27.868 [2024-10-09 08:02:29.784259] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:27.868 [2024-10-09 08:02:29.784271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.868 [2024-10-09 08:02:29.784300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:27.868 [2024-10-09 08:02:29.784313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.367 ms 00:22:27.868 [2024-10-09 08:02:29.784324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.868 [2024-10-09 08:02:29.801080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.868 [2024-10-09 08:02:29.801173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:27.868 [2024-10-09 08:02:29.801192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.694 ms 00:22:27.868 [2024-10-09 08:02:29.801203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.868 [2024-10-09 08:02:29.801684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.868 [2024-10-09 08:02:29.801704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Deinitialize P2L checkpointing 00:22:27.868 [2024-10-09 08:02:29.801716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.434 ms 00:22:27.868 [2024-10-09 08:02:29.801741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.868 [2024-10-09 08:02:29.839037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:27.868 [2024-10-09 08:02:29.839117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:27.868 [2024-10-09 08:02:29.839137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:27.868 [2024-10-09 08:02:29.839149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.869 [2024-10-09 08:02:29.839240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:27.869 [2024-10-09 08:02:29.839258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:27.869 [2024-10-09 08:02:29.839276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:27.869 [2024-10-09 08:02:29.839294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.869 [2024-10-09 08:02:29.839465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:27.869 [2024-10-09 08:02:29.839487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:27.869 [2024-10-09 08:02:29.839500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:27.869 [2024-10-09 08:02:29.839511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.869 [2024-10-09 08:02:29.839538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:27.869 [2024-10-09 08:02:29.839553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:27.869 [2024-10-09 08:02:29.839564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:27.869 [2024-10-09 08:02:29.839575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.128 [2024-10-09 08:02:29.943557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:28.128 [2024-10-09 08:02:29.943624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:28.128 [2024-10-09 08:02:29.943650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:28.128 [2024-10-09 08:02:29.943664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.128 [2024-10-09 08:02:30.028473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:28.128 [2024-10-09 08:02:30.028540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:28.128 [2024-10-09 08:02:30.028560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:28.128 [2024-10-09 08:02:30.028583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.128 [2024-10-09 08:02:30.028709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:28.128 [2024-10-09 08:02:30.028730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:28.128 [2024-10-09 08:02:30.028742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:28.128 [2024-10-09 08:02:30.028753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.128 [2024-10-09 08:02:30.028803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:28.128 
[2024-10-09 08:02:30.028820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:28.128 [2024-10-09 08:02:30.028832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:28.128 [2024-10-09 08:02:30.028842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.128 [2024-10-09 08:02:30.028971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:28.128 [2024-10-09 08:02:30.028991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:28.128 [2024-10-09 08:02:30.029004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:28.128 [2024-10-09 08:02:30.029015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.128 [2024-10-09 08:02:30.029063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:28.128 [2024-10-09 08:02:30.029082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:28.128 [2024-10-09 08:02:30.029094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:28.128 [2024-10-09 08:02:30.029104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.128 [2024-10-09 08:02:30.029154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:28.128 [2024-10-09 08:02:30.029170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:28.128 [2024-10-09 08:02:30.029182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:28.128 [2024-10-09 08:02:30.029193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.128 [2024-10-09 08:02:30.029246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:28.128 [2024-10-09 08:02:30.029263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:28.128 [2024-10-09 08:02:30.029276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:28.128 [2024-10-09 08:02:30.029286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.128 [2024-10-09 08:02:30.029486] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 591.616 ms, result 0 00:22:30.036 00:22:30.036 00:22:30.036 08:02:31 ftl.ftl_restore -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:22:30.036 [2024-10-09 08:02:31.640320] Starting SPDK v25.01-pre git sha1 1c2942c86 / DPDK 24.03.0 initialization... 
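The statistics dump printed during the shutdown above allows a quick consistency check of the reported write amplification factor, which is simply total media writes divided by user writes:

    WAF = total writes / user writes = 119488 / 118528 ≈ 1.0081

matching the logged WAF: 1.0081, i.e. almost no background relocation took place during this short fill.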
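The spdk_dd invocation above reads a block range back from the ftl0 bdev into a plain file so the restored contents can be checked. An illustrative variant under the same assumptions, reusing only the flags visible in that command (the /tmp paths are hypothetical, not part of this run):

    # dump the same range from ftl0 and diff it against a known-good reference
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/tmp/ftl_dump \
        --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json \
        --skip=131072 --count=262144
    cmp /tmp/ftl_dump /tmp/ftl_reference && echo 'restore data matches'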
00:22:30.036 [2024-10-09 08:02:31.640643] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78413 ] 00:22:30.036 [2024-10-09 08:02:31.799056] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:30.036 [2024-10-09 08:02:31.984056] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:22:30.294 [2024-10-09 08:02:32.300472] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:30.294 [2024-10-09 08:02:32.300561] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:30.553 [2024-10-09 08:02:32.460957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:30.553 [2024-10-09 08:02:32.461027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:30.553 [2024-10-09 08:02:32.461049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:30.553 [2024-10-09 08:02:32.461062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:30.553 [2024-10-09 08:02:32.461139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:30.553 [2024-10-09 08:02:32.461158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:30.553 [2024-10-09 08:02:32.461171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:22:30.553 [2024-10-09 08:02:32.461182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:30.553 [2024-10-09 08:02:32.461214] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:30.553 [2024-10-09 08:02:32.462163] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:30.553 [2024-10-09 08:02:32.462197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:30.553 [2024-10-09 08:02:32.462211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:30.553 [2024-10-09 08:02:32.462224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.990 ms 00:22:30.553 [2024-10-09 08:02:32.462235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:30.553 [2024-10-09 08:02:32.463449] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:22:30.553 [2024-10-09 08:02:32.479761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:30.553 [2024-10-09 08:02:32.479815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:22:30.553 [2024-10-09 08:02:32.479835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.312 ms 00:22:30.553 [2024-10-09 08:02:32.479847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:30.553 [2024-10-09 08:02:32.479922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:30.553 [2024-10-09 08:02:32.479943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:22:30.554 [2024-10-09 08:02:32.479956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:22:30.554 [2024-10-09 08:02:32.479968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:30.554 [2024-10-09 08:02:32.484443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:22:30.554 [2024-10-09 08:02:32.484507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:30.554 [2024-10-09 08:02:32.484525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.377 ms 00:22:30.554 [2024-10-09 08:02:32.484537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:30.554 [2024-10-09 08:02:32.484651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:30.554 [2024-10-09 08:02:32.484672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:30.554 [2024-10-09 08:02:32.484685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:22:30.554 [2024-10-09 08:02:32.484698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:30.554 [2024-10-09 08:02:32.484785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:30.554 [2024-10-09 08:02:32.484804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:30.554 [2024-10-09 08:02:32.484817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:22:30.554 [2024-10-09 08:02:32.484828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:30.554 [2024-10-09 08:02:32.484863] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:30.554 [2024-10-09 08:02:32.489189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:30.554 [2024-10-09 08:02:32.489428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:30.554 [2024-10-09 08:02:32.489460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.333 ms 00:22:30.554 [2024-10-09 08:02:32.489474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:30.554 [2024-10-09 08:02:32.489527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:30.554 [2024-10-09 08:02:32.489543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:30.554 [2024-10-09 08:02:32.489556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:22:30.554 [2024-10-09 08:02:32.489568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:30.554 [2024-10-09 08:02:32.489640] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:22:30.554 [2024-10-09 08:02:32.489674] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:22:30.554 [2024-10-09 08:02:32.489721] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:22:30.554 [2024-10-09 08:02:32.489742] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:22:30.554 [2024-10-09 08:02:32.489857] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:30.554 [2024-10-09 08:02:32.489873] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:30.554 [2024-10-09 08:02:32.489888] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:22:30.554 [2024-10-09 08:02:32.489908] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:30.554 [2024-10-09 08:02:32.489922] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:30.554 [2024-10-09 08:02:32.489934] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:22:30.554 [2024-10-09 08:02:32.489946] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:30.554 [2024-10-09 08:02:32.489958] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:30.554 [2024-10-09 08:02:32.489969] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:30.554 [2024-10-09 08:02:32.489981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:30.554 [2024-10-09 08:02:32.489993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:30.554 [2024-10-09 08:02:32.490005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.344 ms 00:22:30.554 [2024-10-09 08:02:32.490017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:30.554 [2024-10-09 08:02:32.490116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:30.554 [2024-10-09 08:02:32.490137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:30.554 [2024-10-09 08:02:32.490149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:22:30.554 [2024-10-09 08:02:32.490161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:30.554 [2024-10-09 08:02:32.490314] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:30.554 [2024-10-09 08:02:32.490351] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:30.554 [2024-10-09 08:02:32.490367] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:30.554 [2024-10-09 08:02:32.490379] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:30.554 [2024-10-09 08:02:32.490392] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:30.554 [2024-10-09 08:02:32.490402] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:30.554 [2024-10-09 08:02:32.490413] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:22:30.554 [2024-10-09 08:02:32.490424] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:30.554 [2024-10-09 08:02:32.490437] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:22:30.554 [2024-10-09 08:02:32.490448] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:30.554 [2024-10-09 08:02:32.490459] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:30.554 [2024-10-09 08:02:32.490469] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:22:30.554 [2024-10-09 08:02:32.490480] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:30.554 [2024-10-09 08:02:32.490504] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:30.554 [2024-10-09 08:02:32.490516] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:22:30.554 [2024-10-09 08:02:32.490527] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:30.554 [2024-10-09 08:02:32.490540] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:30.554 [2024-10-09 08:02:32.490551] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:22:30.554 [2024-10-09 08:02:32.490561] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:30.554 [2024-10-09 08:02:32.490572] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:30.554 [2024-10-09 08:02:32.490583] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:22:30.554 [2024-10-09 08:02:32.490593] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:30.554 [2024-10-09 08:02:32.490604] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:30.554 [2024-10-09 08:02:32.490614] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:22:30.554 [2024-10-09 08:02:32.490625] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:30.554 [2024-10-09 08:02:32.490635] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:30.554 [2024-10-09 08:02:32.490646] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:22:30.554 [2024-10-09 08:02:32.490656] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:30.554 [2024-10-09 08:02:32.490667] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:30.554 [2024-10-09 08:02:32.490677] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:22:30.554 [2024-10-09 08:02:32.490688] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:30.554 [2024-10-09 08:02:32.490699] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:30.554 [2024-10-09 08:02:32.490710] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:22:30.554 [2024-10-09 08:02:32.490720] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:30.554 [2024-10-09 08:02:32.490730] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:30.554 [2024-10-09 08:02:32.490742] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:22:30.554 [2024-10-09 08:02:32.490752] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:30.554 [2024-10-09 08:02:32.490763] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:30.554 [2024-10-09 08:02:32.490773] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:22:30.554 [2024-10-09 08:02:32.490784] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:30.554 [2024-10-09 08:02:32.490794] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:30.554 [2024-10-09 08:02:32.490805] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:22:30.554 [2024-10-09 08:02:32.490815] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:30.554 [2024-10-09 08:02:32.490827] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:30.554 [2024-10-09 08:02:32.490839] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:30.554 [2024-10-09 08:02:32.490855] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:30.554 [2024-10-09 08:02:32.490866] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:30.554 [2024-10-09 08:02:32.490878] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:30.554 [2024-10-09 08:02:32.490890] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:30.554 [2024-10-09 08:02:32.490900] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:30.554 
[2024-10-09 08:02:32.490911] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:30.554 [2024-10-09 08:02:32.490922] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:30.554 [2024-10-09 08:02:32.490932] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:30.554 [2024-10-09 08:02:32.490946] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:30.554 [2024-10-09 08:02:32.490959] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:30.554 [2024-10-09 08:02:32.490972] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:22:30.554 [2024-10-09 08:02:32.490984] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:22:30.554 [2024-10-09 08:02:32.491001] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:22:30.554 [2024-10-09 08:02:32.491012] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:22:30.554 [2024-10-09 08:02:32.491024] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:22:30.554 [2024-10-09 08:02:32.491036] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:22:30.554 [2024-10-09 08:02:32.491048] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:22:30.554 [2024-10-09 08:02:32.491060] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:22:30.554 [2024-10-09 08:02:32.491071] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:22:30.554 [2024-10-09 08:02:32.491083] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:22:30.555 [2024-10-09 08:02:32.491095] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:22:30.555 [2024-10-09 08:02:32.491106] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:22:30.555 [2024-10-09 08:02:32.491118] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:22:30.555 [2024-10-09 08:02:32.491130] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:22:30.555 [2024-10-09 08:02:32.491142] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:30.555 [2024-10-09 08:02:32.491155] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:30.555 [2024-10-09 08:02:32.491169] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:22:30.555 [2024-10-09 08:02:32.491181] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:30.555 [2024-10-09 08:02:32.491193] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:30.555 [2024-10-09 08:02:32.491205] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:30.555 [2024-10-09 08:02:32.491218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:30.555 [2024-10-09 08:02:32.491229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:30.555 [2024-10-09 08:02:32.491241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.976 ms 00:22:30.555 [2024-10-09 08:02:32.491252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:30.555 [2024-10-09 08:02:32.536522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:30.555 [2024-10-09 08:02:32.536782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:30.555 [2024-10-09 08:02:32.536931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.204 ms 00:22:30.555 [2024-10-09 08:02:32.536986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:30.555 [2024-10-09 08:02:32.537223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:30.555 [2024-10-09 08:02:32.537421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:30.555 [2024-10-09 08:02:32.537552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:22:30.555 [2024-10-09 08:02:32.537690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:30.814 [2024-10-09 08:02:32.578103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:30.814 [2024-10-09 08:02:32.578344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:30.814 [2024-10-09 08:02:32.578485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.267 ms 00:22:30.814 [2024-10-09 08:02:32.578610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:30.814 [2024-10-09 08:02:32.578724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:30.814 [2024-10-09 08:02:32.578812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:30.814 [2024-10-09 08:02:32.578921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:30.814 [2024-10-09 08:02:32.578974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:30.814 [2024-10-09 08:02:32.579497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:30.814 [2024-10-09 08:02:32.579642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:30.814 [2024-10-09 08:02:32.579772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.311 ms 00:22:30.814 [2024-10-09 08:02:32.579875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:30.814 [2024-10-09 08:02:32.580083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:30.814 [2024-10-09 08:02:32.580142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:30.814 [2024-10-09 08:02:32.580347] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.137 ms 00:22:30.814 [2024-10-09 08:02:32.580403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:30.814 [2024-10-09 08:02:32.596712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:30.814 [2024-10-09 08:02:32.596960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:30.814 [2024-10-09 08:02:32.597079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.103 ms 00:22:30.814 [2024-10-09 08:02:32.597130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:30.814 [2024-10-09 08:02:32.613579] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:22:30.814 [2024-10-09 08:02:32.613772] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:22:30.814 [2024-10-09 08:02:32.613909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:30.814 [2024-10-09 08:02:32.614021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:22:30.814 [2024-10-09 08:02:32.614074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.530 ms 00:22:30.814 [2024-10-09 08:02:32.614169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:30.814 [2024-10-09 08:02:32.644060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:30.814 [2024-10-09 08:02:32.644241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:22:30.814 [2024-10-09 08:02:32.644378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.710 ms 00:22:30.814 [2024-10-09 08:02:32.644430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:30.814 [2024-10-09 08:02:32.660234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:30.814 [2024-10-09 08:02:32.660425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:22:30.814 [2024-10-09 08:02:32.660543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.683 ms 00:22:30.814 [2024-10-09 08:02:32.660593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:30.814 [2024-10-09 08:02:32.676279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:30.814 [2024-10-09 08:02:32.676455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:22:30.814 [2024-10-09 08:02:32.676585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.529 ms 00:22:30.814 [2024-10-09 08:02:32.676610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:30.814 [2024-10-09 08:02:32.677433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:30.814 [2024-10-09 08:02:32.677463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:30.814 [2024-10-09 08:02:32.677478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.694 ms 00:22:30.814 [2024-10-09 08:02:32.677490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:30.814 [2024-10-09 08:02:32.750574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:30.814 [2024-10-09 08:02:32.750649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:22:30.814 [2024-10-09 08:02:32.750670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 73.056 ms 00:22:30.814 [2024-10-09 08:02:32.750684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:30.814 [2024-10-09 08:02:32.763432] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:22:30.814 [2024-10-09 08:02:32.766291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:30.814 [2024-10-09 08:02:32.766350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:30.814 [2024-10-09 08:02:32.766377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.529 ms 00:22:30.814 [2024-10-09 08:02:32.766390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:30.814 [2024-10-09 08:02:32.766521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:30.814 [2024-10-09 08:02:32.766542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:22:30.814 [2024-10-09 08:02:32.766555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:22:30.814 [2024-10-09 08:02:32.766567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:30.814 [2024-10-09 08:02:32.768189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:30.814 [2024-10-09 08:02:32.768242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:30.814 [2024-10-09 08:02:32.768259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.563 ms 00:22:30.814 [2024-10-09 08:02:32.768270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:30.814 [2024-10-09 08:02:32.768321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:30.814 [2024-10-09 08:02:32.768356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:30.814 [2024-10-09 08:02:32.768371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:22:30.814 [2024-10-09 08:02:32.768383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:30.814 [2024-10-09 08:02:32.768429] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:22:30.814 [2024-10-09 08:02:32.768448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:30.814 [2024-10-09 08:02:32.768459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:22:30.814 [2024-10-09 08:02:32.768477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:22:30.814 [2024-10-09 08:02:32.768488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:30.814 [2024-10-09 08:02:32.799958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:30.814 [2024-10-09 08:02:32.800020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:30.814 [2024-10-09 08:02:32.800041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.436 ms 00:22:30.814 [2024-10-09 08:02:32.800053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:30.814 [2024-10-09 08:02:32.800150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:30.814 [2024-10-09 08:02:32.800170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:30.814 [2024-10-09 08:02:32.800184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:22:30.814 [2024-10-09 08:02:32.800200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
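Each management sequence closes with a finish_msg summary, such as the 'FTL startup' and 'FTL shutdown' records above. A hedged one-liner to collect all of them from a saved log (again assuming one record per line in a hypothetical build.log):

    grep -oE "Management process finished, name '[^']+', duration = [0-9.]+ ms, result [0-9]+" build.log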
00:22:30.814 [2024-10-09 08:02:32.802042] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 340.003 ms, result 0 00:22:32.190  [2024-10-09T08:02:35.136Z] Copying: 24/1024 [MB] (24 MBps) [... intermediate progress redraws elided ...] [2024-10-09T08:03:12.623Z] Copying: 1024/1024 [MB] (average 25 MBps)[2024-10-09 08:03:12.549485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:10.611 [2024-10-09 08:03:12.549558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:10.611 [2024-10-09 08:03:12.549582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:10.612 [2024-10-09 08:03:12.549597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:10.612 [2024-10-09 08:03:12.549633] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:10.612 [2024-10-09 08:03:12.553944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:10.612 [2024-10-09 08:03:12.554125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:10.612 [2024-10-09 08:03:12.554349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.275 ms 00:23:10.612
[2024-10-09 08:03:12.554518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:10.612 [2024-10-09 08:03:12.554882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:10.612 [2024-10-09 08:03:12.555064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:10.612 [2024-10-09 08:03:12.555223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.272 ms 00:23:10.612 [2024-10-09 08:03:12.555287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:10.612 [2024-10-09 08:03:12.560641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:10.612 [2024-10-09 08:03:12.560842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:10.612 [2024-10-09 08:03:12.560999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.190 ms 00:23:10.612 [2024-10-09 08:03:12.561177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:10.612 [2024-10-09 08:03:12.569563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:10.612 [2024-10-09 08:03:12.569790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:23:10.612 [2024-10-09 08:03:12.569948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.250 ms 00:23:10.612 [2024-10-09 08:03:12.570009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:10.612 [2024-10-09 08:03:12.608811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:10.612 [2024-10-09 08:03:12.609120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:10.612 [2024-10-09 08:03:12.609259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.569 ms 00:23:10.612 [2024-10-09 08:03:12.609320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:10.870 [2024-10-09 08:03:12.630894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:10.870 [2024-10-09 08:03:12.630985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:10.870 [2024-10-09 08:03:12.631010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.362 ms 00:23:10.870 [2024-10-09 08:03:12.631025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:10.870 [2024-10-09 08:03:12.724881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:10.870 [2024-10-09 08:03:12.725021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:10.871 [2024-10-09 08:03:12.725049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 93.753 ms 00:23:10.871 [2024-10-09 08:03:12.725074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:10.871 [2024-10-09 08:03:12.763728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:10.871 [2024-10-09 08:03:12.763799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:23:10.871 [2024-10-09 08:03:12.763822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.621 ms 00:23:10.871 [2024-10-09 08:03:12.763836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:10.871 [2024-10-09 08:03:12.801735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:10.871 [2024-10-09 08:03:12.801809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:23:10.871 [2024-10-09 08:03:12.801833] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.829 ms 00:23:10.871 [2024-10-09 08:03:12.801847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:10.871 [2024-10-09 08:03:12.839545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:10.871 [2024-10-09 08:03:12.839619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:10.871 [2024-10-09 08:03:12.839642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.631 ms 00:23:10.871 [2024-10-09 08:03:12.839668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:10.871 [2024-10-09 08:03:12.877419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:10.871 [2024-10-09 08:03:12.877695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:10.871 [2024-10-09 08:03:12.877733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.607 ms 00:23:10.871 [2024-10-09 08:03:12.877749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:10.871 [2024-10-09 08:03:12.877818] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:10.871 [2024-10-09 08:03:12.877848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open 00:23:10.871 [2024-10-09 08:03:12.877866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:23:10.871 [2024-10-09 08:03:12.877882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:10.871 [2024-10-09 08:03:12.877896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:10.871 [2024-10-09 08:03:12.877911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:10.871 [2024-10-09 08:03:12.877926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:10.871 [2024-10-09 08:03:12.877940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:10.871 [2024-10-09 08:03:12.877955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:10.871 [2024-10-09 08:03:12.877970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:10.871 [2024-10-09 08:03:12.877984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:10.871 [2024-10-09 08:03:12.877999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:10.871 [2024-10-09 08:03:12.878013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:10.871 [2024-10-09 08:03:12.878027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:10.871 [2024-10-09 08:03:12.878042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:10.871 [2024-10-09 08:03:12.878056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:10.871 [2024-10-09 08:03:12.878071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:10.871 [2024-10-09 08:03:12.878085] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:10.871 [2024-10-09 08:03:12.878100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:10.871 [2024-10-09 08:03:12.878114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:23:10.871 [2024-10-09 08:03:12.878129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:23:10.871 [2024-10-09 08:03:12.878143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:23:10.871 [2024-10-09 08:03:12.878158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:23:10.871 [2024-10-09 08:03:12.878172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:23:10.871 [2024-10-09 08:03:12.878187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:23:10.871 [2024-10-09 08:03:12.878202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:23:10.871 [2024-10-09 08:03:12.878216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:23:10.871 [2024-10-09 08:03:12.878230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:23:10.871 [2024-10-09 08:03:12.878247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:23:10.871 [2024-10-09 08:03:12.878262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:23:10.871 [2024-10-09 08:03:12.878276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:23:10.871 [2024-10-09 08:03:12.878291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:10.871 [2024-10-09 08:03:12.878305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:23:10.871 [2024-10-09 08:03:12.878319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:23:10.871 [2024-10-09 08:03:12.878362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:23:10.871 [2024-10-09 08:03:12.878382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:23:10.871 [2024-10-09 08:03:12.878397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:23:10.871 [2024-10-09 08:03:12.878411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:23:10.871 [2024-10-09 08:03:12.878426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:23:10.871 [2024-10-09 08:03:12.878441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:23:10.871 [2024-10-09 08:03:12.878455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:23:10.871 [2024-10-09 08:03:12.878470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:10.871 [2024-10-09 08:03:12.878484] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:23:10.871 [2024-10-09 08:03:12.878499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:23:10.871 [2024-10-09 08:03:12.878521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:10.871 [2024-10-09 08:03:12.878536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:23:10.871 [2024-10-09 08:03:12.878550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:23:10.871 [2024-10-09 08:03:12.878564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:23:10.871 [2024-10-09 08:03:12.878578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:23:10.871 [2024-10-09 08:03:12.878593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:23:10.871 [2024-10-09 08:03:12.878607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:23:10.871 [2024-10-09 08:03:12.878621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:23:10.871 [2024-10-09 08:03:12.878636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:23:10.871 [2024-10-09 08:03:12.878650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:23:10.871 [2024-10-09 08:03:12.878664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:23:10.871 [2024-10-09 08:03:12.878678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:23:10.871 [2024-10-09 08:03:12.878693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:23:10.871 [2024-10-09 08:03:12.878707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:23:10.871 [2024-10-09 08:03:12.878721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:23:10.871 [2024-10-09 08:03:12.878735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:23:10.871 [2024-10-09 08:03:12.878749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:23:10.871 [2024-10-09 08:03:12.878763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:23:10.871 [2024-10-09 08:03:12.878777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:23:10.871 [2024-10-09 08:03:12.878792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:23:10.871 [2024-10-09 08:03:12.878806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:23:10.871 [2024-10-09 08:03:12.878819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:23:10.871 [2024-10-09 08:03:12.878834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:23:10.871 [2024-10-09 
08:03:12.878858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:23:10.871 [2024-10-09 08:03:12.878873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:23:10.871 [2024-10-09 08:03:12.878888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:23:10.871 [2024-10-09 08:03:12.878902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:10.871 [2024-10-09 08:03:12.878916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:23:10.871 [2024-10-09 08:03:12.878930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:23:10.871 [2024-10-09 08:03:12.878945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:23:10.871 [2024-10-09 08:03:12.878959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:23:10.871 [2024-10-09 08:03:12.878973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:23:10.871 [2024-10-09 08:03:12.878987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:10.871 [2024-10-09 08:03:12.879002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:10.872 [2024-10-09 08:03:12.879016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:23:10.872 [2024-10-09 08:03:12.879032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:10.872 [2024-10-09 08:03:12.879046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:10.872 [2024-10-09 08:03:12.879060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:10.872 [2024-10-09 08:03:12.879075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:10.872 [2024-10-09 08:03:12.879089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:10.872 [2024-10-09 08:03:12.879104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:10.872 [2024-10-09 08:03:12.879117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:10.872 [2024-10-09 08:03:12.879132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:10.872 [2024-10-09 08:03:12.879146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:10.872 [2024-10-09 08:03:12.879160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:10.872 [2024-10-09 08:03:12.879175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:10.872 [2024-10-09 08:03:12.879189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:10.872 [2024-10-09 08:03:12.879203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 
00:23:10.872 [2024-10-09 08:03:12.879217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:10.872 [2024-10-09 08:03:12.879232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:10.872 [2024-10-09 08:03:12.879247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:10.872 [2024-10-09 08:03:12.879261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:10.872 [2024-10-09 08:03:12.879275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:10.872 [2024-10-09 08:03:12.879289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:10.872 [2024-10-09 08:03:12.879303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:23:10.872 [2024-10-09 08:03:12.879319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:10.872 [2024-10-09 08:03:12.879346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:10.872 [2024-10-09 08:03:12.879373] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:10.872 [2024-10-09 08:03:12.879396] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 2b8840b3-27ea-4ae9-a311-de7b8f0c5f0b 00:23:10.872 [2024-10-09 08:03:12.879421] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072 00:23:10.872 [2024-10-09 08:03:12.879435] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 13504 00:23:10.872 [2024-10-09 08:03:12.879448] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 12544 00:23:10.872 [2024-10-09 08:03:12.879462] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0765 00:23:10.872 [2024-10-09 08:03:12.879476] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:10.872 [2024-10-09 08:03:12.879490] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:10.872 [2024-10-09 08:03:12.879503] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:10.872 [2024-10-09 08:03:12.879515] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:10.872 [2024-10-09 08:03:12.879527] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:10.872 [2024-10-09 08:03:12.879541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:10.872 [2024-10-09 08:03:12.879555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:10.872 [2024-10-09 08:03:12.879585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.725 ms 00:23:10.872 [2024-10-09 08:03:12.879599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.130 [2024-10-09 08:03:12.898399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.130 [2024-10-09 08:03:12.898450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:11.130 [2024-10-09 08:03:12.898469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.725 ms 00:23:11.130 [2024-10-09 08:03:12.898481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.130 [2024-10-09 08:03:12.898925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:23:11.130 [2024-10-09 08:03:12.898947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:11.130 [2024-10-09 08:03:12.898971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.408 ms 00:23:11.130 [2024-10-09 08:03:12.898983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.130 [2024-10-09 08:03:12.936155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:11.130 [2024-10-09 08:03:12.936224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:11.130 [2024-10-09 08:03:12.936243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:11.130 [2024-10-09 08:03:12.936255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.130 [2024-10-09 08:03:12.936354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:11.130 [2024-10-09 08:03:12.936372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:11.130 [2024-10-09 08:03:12.936394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:11.130 [2024-10-09 08:03:12.936405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.130 [2024-10-09 08:03:12.936517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:11.130 [2024-10-09 08:03:12.936536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:11.130 [2024-10-09 08:03:12.936549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:11.130 [2024-10-09 08:03:12.936561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.130 [2024-10-09 08:03:12.936584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:11.130 [2024-10-09 08:03:12.936598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:11.130 [2024-10-09 08:03:12.936609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:11.130 [2024-10-09 08:03:12.936628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.130 [2024-10-09 08:03:13.040795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:11.130 [2024-10-09 08:03:13.040869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:11.130 [2024-10-09 08:03:13.040888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:11.130 [2024-10-09 08:03:13.040901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.130 [2024-10-09 08:03:13.127054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:11.130 [2024-10-09 08:03:13.127132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:11.130 [2024-10-09 08:03:13.127160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:11.130 [2024-10-09 08:03:13.127173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.130 [2024-10-09 08:03:13.127280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:11.130 [2024-10-09 08:03:13.127298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:11.130 [2024-10-09 08:03:13.127311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:11.130 [2024-10-09 08:03:13.127323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
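(Annotation, not part of the captured output: the ftl_debug.c stats dump above reports WAF as total writes divided by user writes — 13504 / 12544 ≈ 1.0765, matching the logged value. A one-line check, purely illustrative:)

awk 'BEGIN {printf "WAF: %.4f\n", 13504 / 12544}'   # prints WAF: 1.0765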
00:23:11.130 [2024-10-09 08:03:13.127416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:11.130 [2024-10-09 08:03:13.127435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:11.130 [2024-10-09 08:03:13.127447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:11.130 [2024-10-09 08:03:13.127467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.130 [2024-10-09 08:03:13.127600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:11.130 [2024-10-09 08:03:13.127620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:11.130 [2024-10-09 08:03:13.127634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:11.130 [2024-10-09 08:03:13.127645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.130 [2024-10-09 08:03:13.127706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:11.130 [2024-10-09 08:03:13.127725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:11.130 [2024-10-09 08:03:13.127738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:11.130 [2024-10-09 08:03:13.127750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.130 [2024-10-09 08:03:13.127827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:11.130 [2024-10-09 08:03:13.127849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:11.130 [2024-10-09 08:03:13.127862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:11.130 [2024-10-09 08:03:13.127872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.130 [2024-10-09 08:03:13.127930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:11.130 [2024-10-09 08:03:13.127947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:11.130 [2024-10-09 08:03:13.127959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:11.130 [2024-10-09 08:03:13.127971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.130 [2024-10-09 08:03:13.128120] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 578.610 ms, result 0 00:23:12.504 00:23:12.504 00:23:12.504 08:03:14 ftl.ftl_restore -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:23:15.034 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:23:15.034 08:03:16 ftl.ftl_restore -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:23:15.034 08:03:16 ftl.ftl_restore -- ftl/restore.sh@85 -- # restore_kill 00:23:15.034 08:03:16 ftl.ftl_restore -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:23:15.034 08:03:16 ftl.ftl_restore -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:23:15.034 08:03:16 ftl.ftl_restore -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:23:15.034 Process with pid 76877 is not found 00:23:15.034 08:03:16 ftl.ftl_restore -- ftl/restore.sh@32 -- # killprocess 76877 00:23:15.034 08:03:16 ftl.ftl_restore -- common/autotest_common.sh@950 -- # '[' -z 76877 ']' 00:23:15.034 08:03:16 ftl.ftl_restore -- common/autotest_common.sh@954 -- # kill -0 76877 00:23:15.034 
/home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (76877) - No such process 00:23:15.034 08:03:16 ftl.ftl_restore -- common/autotest_common.sh@977 -- # echo 'Process with pid 76877 is not found' 00:23:15.034 08:03:16 ftl.ftl_restore -- ftl/restore.sh@33 -- # remove_shm 00:23:15.034 Remove shared memory files 00:23:15.034 08:03:16 ftl.ftl_restore -- ftl/common.sh@204 -- # echo Remove shared memory files 00:23:15.034 08:03:16 ftl.ftl_restore -- ftl/common.sh@205 -- # rm -f rm -f 00:23:15.034 08:03:16 ftl.ftl_restore -- ftl/common.sh@206 -- # rm -f rm -f 00:23:15.034 08:03:16 ftl.ftl_restore -- ftl/common.sh@207 -- # rm -f rm -f 00:23:15.034 08:03:16 ftl.ftl_restore -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:23:15.034 08:03:16 ftl.ftl_restore -- ftl/common.sh@209 -- # rm -f rm -f 00:23:15.034 ************************************ 00:23:15.034 END TEST ftl_restore 00:23:15.035 ************************************ 00:23:15.035 00:23:15.035 real 3m14.619s 00:23:15.035 user 2m59.541s 00:23:15.035 sys 0m17.533s 00:23:15.035 08:03:16 ftl.ftl_restore -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:15.035 08:03:16 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:23:15.035 08:03:16 ftl -- ftl/ftl.sh@77 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:23:15.035 08:03:16 ftl -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:23:15.035 08:03:16 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:15.035 08:03:16 ftl -- common/autotest_common.sh@10 -- # set +x 00:23:15.035 ************************************ 00:23:15.035 START TEST ftl_dirty_shutdown 00:23:15.035 ************************************ 00:23:15.035 08:03:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:23:15.035 * Looking for test storage... 
00:23:15.035 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:23:15.035 08:03:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:23:15.035 08:03:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1681 -- # lcov --version 00:23:15.035 08:03:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:23:15.035 08:03:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:23:15.035 08:03:16 ftl.ftl_dirty_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:15.035 08:03:16 ftl.ftl_dirty_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:15.035 08:03:16 ftl.ftl_dirty_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:15.035 08:03:16 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:23:15.035 08:03:16 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:23:15.035 08:03:16 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:23:15.035 08:03:16 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:23:15.035 08:03:16 ftl.ftl_dirty_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:23:15.035 08:03:16 ftl.ftl_dirty_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:23:15.035 08:03:16 ftl.ftl_dirty_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:23:15.035 08:03:16 ftl.ftl_dirty_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:15.035 08:03:16 ftl.ftl_dirty_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:23:15.035 08:03:16 ftl.ftl_dirty_shutdown -- scripts/common.sh@345 -- # : 1 00:23:15.035 08:03:16 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:15.035 08:03:16 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:15.035 08:03:16 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # decimal 1 00:23:15.035 08:03:16 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=1 00:23:15.035 08:03:16 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:15.035 08:03:16 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 1 00:23:15.035 08:03:16 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:23:15.035 08:03:16 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # decimal 2 00:23:15.035 08:03:16 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=2 00:23:15.035 08:03:16 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:15.035 08:03:16 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 2 00:23:15.035 08:03:16 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:23:15.035 08:03:16 ftl.ftl_dirty_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:15.035 08:03:16 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:15.035 08:03:16 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # return 0 00:23:15.035 08:03:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:15.035 08:03:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:23:15.035 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:15.035 --rc genhtml_branch_coverage=1 00:23:15.035 --rc genhtml_function_coverage=1 00:23:15.035 --rc genhtml_legend=1 00:23:15.035 --rc geninfo_all_blocks=1 00:23:15.035 --rc geninfo_unexecuted_blocks=1 00:23:15.035 00:23:15.035 ' 00:23:15.035 08:03:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:23:15.035 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:15.035 --rc genhtml_branch_coverage=1 00:23:15.035 --rc genhtml_function_coverage=1 00:23:15.035 --rc genhtml_legend=1 00:23:15.035 --rc geninfo_all_blocks=1 00:23:15.035 --rc geninfo_unexecuted_blocks=1 00:23:15.035 00:23:15.035 ' 00:23:15.035 08:03:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:23:15.035 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:15.035 --rc genhtml_branch_coverage=1 00:23:15.035 --rc genhtml_function_coverage=1 00:23:15.035 --rc genhtml_legend=1 00:23:15.035 --rc geninfo_all_blocks=1 00:23:15.035 --rc geninfo_unexecuted_blocks=1 00:23:15.035 00:23:15.035 ' 00:23:15.035 08:03:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:23:15.035 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:15.035 --rc genhtml_branch_coverage=1 00:23:15.035 --rc genhtml_function_coverage=1 00:23:15.035 --rc genhtml_legend=1 00:23:15.035 --rc geninfo_all_blocks=1 00:23:15.035 --rc geninfo_unexecuted_blocks=1 00:23:15.035 00:23:15.035 ' 00:23:15.035 08:03:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:23:15.035 08:03:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:23:15.035 08:03:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:23:15.035 08:03:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:23:15.035 08:03:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # readlink -f 
/home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:23:15.035 08:03:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:23:15.035 08:03:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:15.035 08:03:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:23:15.035 08:03:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:23:15.035 08:03:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:15.035 08:03:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:15.035 08:03:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:23:15.035 08:03:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:23:15.035 08:03:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:23:15.035 08:03:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:23:15.035 08:03:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:23:15.035 08:03:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:23:15.035 08:03:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:15.035 08:03:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:15.035 08:03:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:23:15.035 08:03:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:23:15.035 08:03:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:23:15.035 08:03:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:23:15.035 08:03:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:23:15.035 08:03:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:23:15.035 08:03:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:23:15.035 08:03:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:23:15.035 08:03:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:15.035 08:03:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:15.035 08:03:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:15.035 08:03:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:15.035 08:03:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:23:15.035 08:03:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:23:15.035 08:03:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:10.0 00:23:15.035 08:03:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:23:15.035 08:03:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:23:15.035 08:03:16 
ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:11.0 00:23:15.035 08:03:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:23:15.035 08:03:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:23:15.035 08:03:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:23:15.035 08:03:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:23:15.035 08:03:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:23:15.035 08:03:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@45 -- # svcpid=78922 00:23:15.035 08:03:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:23:15.035 08:03:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 78922 00:23:15.035 08:03:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@831 -- # '[' -z 78922 ']' 00:23:15.035 08:03:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:15.035 08:03:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:15.035 08:03:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:15.035 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:15.035 08:03:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:15.035 08:03:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:15.294 [2024-10-09 08:03:17.059072] Starting SPDK v25.01-pre git sha1 1c2942c86 / DPDK 24.03.0 initialization... 
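(Annotation, not part of the captured output: the trace above launches the SPDK target with core mask 0x1, records its pid as svcpid=78922, and blocks in waitforlisten until the target's RPC socket answers. A minimal sketch of that launch-and-wait pattern, using the binary and script paths shown in this log; the polling loop is a simplification of waitforlisten, not its actual implementation, and rpc_get_methods is used here only as a cheap liveness probe:)

/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &
svcpid=$!
# Poll the default RPC socket until the target is up and answering.
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do
    sleep 0.1
done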
00:23:15.294 [2024-10-09 08:03:17.059501] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78922 ] 00:23:15.294 [2024-10-09 08:03:17.230819] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:15.552 [2024-10-09 08:03:17.428858] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:23:16.486 08:03:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:16.486 08:03:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@864 -- # return 0 00:23:16.486 08:03:18 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:23:16.487 08:03:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@54 -- # local name=nvme0 00:23:16.487 08:03:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:23:16.487 08:03:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@56 -- # local size=103424 00:23:16.487 08:03:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:23:16.487 08:03:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:23:16.745 08:03:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:23:16.745 08:03:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@62 -- # local base_size 00:23:16.745 08:03:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:23:16.745 08:03:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:23:16.745 08:03:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:23:16.745 08:03:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:23:16.745 08:03:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:23:16.745 08:03:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:23:17.003 08:03:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:23:17.004 { 00:23:17.004 "name": "nvme0n1", 00:23:17.004 "aliases": [ 00:23:17.004 "b5334370-a4ad-4b93-a65c-ee93a5f1390f" 00:23:17.004 ], 00:23:17.004 "product_name": "NVMe disk", 00:23:17.004 "block_size": 4096, 00:23:17.004 "num_blocks": 1310720, 00:23:17.004 "uuid": "b5334370-a4ad-4b93-a65c-ee93a5f1390f", 00:23:17.004 "numa_id": -1, 00:23:17.004 "assigned_rate_limits": { 00:23:17.004 "rw_ios_per_sec": 0, 00:23:17.004 "rw_mbytes_per_sec": 0, 00:23:17.004 "r_mbytes_per_sec": 0, 00:23:17.004 "w_mbytes_per_sec": 0 00:23:17.004 }, 00:23:17.004 "claimed": true, 00:23:17.004 "claim_type": "read_many_write_one", 00:23:17.004 "zoned": false, 00:23:17.004 "supported_io_types": { 00:23:17.004 "read": true, 00:23:17.004 "write": true, 00:23:17.004 "unmap": true, 00:23:17.004 "flush": true, 00:23:17.004 "reset": true, 00:23:17.004 "nvme_admin": true, 00:23:17.004 "nvme_io": true, 00:23:17.004 "nvme_io_md": false, 00:23:17.004 "write_zeroes": true, 00:23:17.004 "zcopy": false, 00:23:17.004 "get_zone_info": false, 00:23:17.004 "zone_management": false, 00:23:17.004 "zone_append": false, 00:23:17.004 "compare": true, 00:23:17.004 "compare_and_write": false, 00:23:17.004 "abort": true, 00:23:17.004 "seek_hole": false, 00:23:17.004 "seek_data": false, 00:23:17.004 
"copy": true, 00:23:17.004 "nvme_iov_md": false 00:23:17.004 }, 00:23:17.004 "driver_specific": { 00:23:17.004 "nvme": [ 00:23:17.004 { 00:23:17.004 "pci_address": "0000:00:11.0", 00:23:17.004 "trid": { 00:23:17.004 "trtype": "PCIe", 00:23:17.004 "traddr": "0000:00:11.0" 00:23:17.004 }, 00:23:17.004 "ctrlr_data": { 00:23:17.004 "cntlid": 0, 00:23:17.004 "vendor_id": "0x1b36", 00:23:17.004 "model_number": "QEMU NVMe Ctrl", 00:23:17.004 "serial_number": "12341", 00:23:17.004 "firmware_revision": "8.0.0", 00:23:17.004 "subnqn": "nqn.2019-08.org.qemu:12341", 00:23:17.004 "oacs": { 00:23:17.004 "security": 0, 00:23:17.004 "format": 1, 00:23:17.004 "firmware": 0, 00:23:17.004 "ns_manage": 1 00:23:17.004 }, 00:23:17.004 "multi_ctrlr": false, 00:23:17.004 "ana_reporting": false 00:23:17.004 }, 00:23:17.004 "vs": { 00:23:17.004 "nvme_version": "1.4" 00:23:17.004 }, 00:23:17.004 "ns_data": { 00:23:17.004 "id": 1, 00:23:17.004 "can_share": false 00:23:17.004 } 00:23:17.004 } 00:23:17.004 ], 00:23:17.004 "mp_policy": "active_passive" 00:23:17.004 } 00:23:17.004 } 00:23:17.004 ]' 00:23:17.004 08:03:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:23:17.263 08:03:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:23:17.263 08:03:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:23:17.263 08:03:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=1310720 00:23:17.263 08:03:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:23:17.263 08:03:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 5120 00:23:17.263 08:03:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:23:17.263 08:03:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:23:17.263 08:03:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:23:17.263 08:03:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:23:17.263 08:03:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:23:17.522 08:03:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # stores=4feb386d-6833-4f0a-b42e-129da1e3bb25 00:23:17.522 08:03:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:23:17.522 08:03:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 4feb386d-6833-4f0a-b42e-129da1e3bb25 00:23:17.781 08:03:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:23:18.039 08:03:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # lvs=6b6005ad-9a72-4ab8-bc50-9aef5c064f4a 00:23:18.039 08:03:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 6b6005ad-9a72-4ab8-bc50-9aef5c064f4a 00:23:18.605 08:03:20 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # split_bdev=eac8b458-19a2-43d6-9ab6-a99e8f51c61e 00:23:18.605 08:03:20 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:10.0 ']' 00:23:18.605 08:03:20 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:10.0 eac8b458-19a2-43d6-9ab6-a99e8f51c61e 00:23:18.605 08:03:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@35 -- # local name=nvc0 00:23:18.605 08:03:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@36 -- # local 
cache_bdf=0000:00:10.0 00:23:18.605 08:03:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@37 -- # local base_bdev=eac8b458-19a2-43d6-9ab6-a99e8f51c61e 00:23:18.605 08:03:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@38 -- # local cache_size= 00:23:18.605 08:03:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # get_bdev_size eac8b458-19a2-43d6-9ab6-a99e8f51c61e 00:23:18.605 08:03:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=eac8b458-19a2-43d6-9ab6-a99e8f51c61e 00:23:18.605 08:03:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:23:18.605 08:03:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:23:18.605 08:03:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:23:18.605 08:03:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b eac8b458-19a2-43d6-9ab6-a99e8f51c61e 00:23:18.863 08:03:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:23:18.863 { 00:23:18.863 "name": "eac8b458-19a2-43d6-9ab6-a99e8f51c61e", 00:23:18.863 "aliases": [ 00:23:18.863 "lvs/nvme0n1p0" 00:23:18.863 ], 00:23:18.863 "product_name": "Logical Volume", 00:23:18.863 "block_size": 4096, 00:23:18.863 "num_blocks": 26476544, 00:23:18.863 "uuid": "eac8b458-19a2-43d6-9ab6-a99e8f51c61e", 00:23:18.863 "assigned_rate_limits": { 00:23:18.863 "rw_ios_per_sec": 0, 00:23:18.863 "rw_mbytes_per_sec": 0, 00:23:18.863 "r_mbytes_per_sec": 0, 00:23:18.863 "w_mbytes_per_sec": 0 00:23:18.863 }, 00:23:18.863 "claimed": false, 00:23:18.863 "zoned": false, 00:23:18.863 "supported_io_types": { 00:23:18.863 "read": true, 00:23:18.863 "write": true, 00:23:18.863 "unmap": true, 00:23:18.863 "flush": false, 00:23:18.863 "reset": true, 00:23:18.863 "nvme_admin": false, 00:23:18.863 "nvme_io": false, 00:23:18.863 "nvme_io_md": false, 00:23:18.863 "write_zeroes": true, 00:23:18.863 "zcopy": false, 00:23:18.863 "get_zone_info": false, 00:23:18.863 "zone_management": false, 00:23:18.863 "zone_append": false, 00:23:18.863 "compare": false, 00:23:18.863 "compare_and_write": false, 00:23:18.863 "abort": false, 00:23:18.863 "seek_hole": true, 00:23:18.863 "seek_data": true, 00:23:18.863 "copy": false, 00:23:18.863 "nvme_iov_md": false 00:23:18.863 }, 00:23:18.863 "driver_specific": { 00:23:18.863 "lvol": { 00:23:18.863 "lvol_store_uuid": "6b6005ad-9a72-4ab8-bc50-9aef5c064f4a", 00:23:18.863 "base_bdev": "nvme0n1", 00:23:18.863 "thin_provision": true, 00:23:18.863 "num_allocated_clusters": 0, 00:23:18.863 "snapshot": false, 00:23:18.863 "clone": false, 00:23:18.863 "esnap_clone": false 00:23:18.863 } 00:23:18.863 } 00:23:18.863 } 00:23:18.863 ]' 00:23:18.863 08:03:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:23:18.863 08:03:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:23:18.863 08:03:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:23:18.863 08:03:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=26476544 00:23:18.863 08:03:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:23:18.863 08:03:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 103424 00:23:18.863 08:03:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # local base_size=5171 00:23:18.863 08:03:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:23:18.863 08:03:20 ftl.ftl_dirty_shutdown -- 
ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:23:19.121 08:03:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:23:19.121 08:03:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@47 -- # [[ -z '' ]] 00:23:19.122 08:03:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # get_bdev_size eac8b458-19a2-43d6-9ab6-a99e8f51c61e 00:23:19.122 08:03:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=eac8b458-19a2-43d6-9ab6-a99e8f51c61e 00:23:19.122 08:03:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:23:19.122 08:03:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:23:19.122 08:03:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:23:19.122 08:03:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b eac8b458-19a2-43d6-9ab6-a99e8f51c61e 00:23:19.687 08:03:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:23:19.687 { 00:23:19.687 "name": "eac8b458-19a2-43d6-9ab6-a99e8f51c61e", 00:23:19.688 "aliases": [ 00:23:19.688 "lvs/nvme0n1p0" 00:23:19.688 ], 00:23:19.688 "product_name": "Logical Volume", 00:23:19.688 "block_size": 4096, 00:23:19.688 "num_blocks": 26476544, 00:23:19.688 "uuid": "eac8b458-19a2-43d6-9ab6-a99e8f51c61e", 00:23:19.688 "assigned_rate_limits": { 00:23:19.688 "rw_ios_per_sec": 0, 00:23:19.688 "rw_mbytes_per_sec": 0, 00:23:19.688 "r_mbytes_per_sec": 0, 00:23:19.688 "w_mbytes_per_sec": 0 00:23:19.688 }, 00:23:19.688 "claimed": false, 00:23:19.688 "zoned": false, 00:23:19.688 "supported_io_types": { 00:23:19.688 "read": true, 00:23:19.688 "write": true, 00:23:19.688 "unmap": true, 00:23:19.688 "flush": false, 00:23:19.688 "reset": true, 00:23:19.688 "nvme_admin": false, 00:23:19.688 "nvme_io": false, 00:23:19.688 "nvme_io_md": false, 00:23:19.688 "write_zeroes": true, 00:23:19.688 "zcopy": false, 00:23:19.688 "get_zone_info": false, 00:23:19.688 "zone_management": false, 00:23:19.688 "zone_append": false, 00:23:19.688 "compare": false, 00:23:19.688 "compare_and_write": false, 00:23:19.688 "abort": false, 00:23:19.688 "seek_hole": true, 00:23:19.688 "seek_data": true, 00:23:19.688 "copy": false, 00:23:19.688 "nvme_iov_md": false 00:23:19.688 }, 00:23:19.688 "driver_specific": { 00:23:19.688 "lvol": { 00:23:19.688 "lvol_store_uuid": "6b6005ad-9a72-4ab8-bc50-9aef5c064f4a", 00:23:19.688 "base_bdev": "nvme0n1", 00:23:19.688 "thin_provision": true, 00:23:19.688 "num_allocated_clusters": 0, 00:23:19.688 "snapshot": false, 00:23:19.688 "clone": false, 00:23:19.688 "esnap_clone": false 00:23:19.688 } 00:23:19.688 } 00:23:19.688 } 00:23:19.688 ]' 00:23:19.688 08:03:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:23:19.688 08:03:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:23:19.688 08:03:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:23:19.688 08:03:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=26476544 00:23:19.688 08:03:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:23:19.688 08:03:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 103424 00:23:19.688 08:03:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # cache_size=5171 00:23:19.688 08:03:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@50 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:23:19.946 08:03:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:23:19.946 08:03:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size eac8b458-19a2-43d6-9ab6-a99e8f51c61e 00:23:19.946 08:03:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=eac8b458-19a2-43d6-9ab6-a99e8f51c61e 00:23:19.946 08:03:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:23:19.946 08:03:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:23:19.946 08:03:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:23:19.946 08:03:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b eac8b458-19a2-43d6-9ab6-a99e8f51c61e 00:23:20.204 08:03:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:23:20.204 { 00:23:20.204 "name": "eac8b458-19a2-43d6-9ab6-a99e8f51c61e", 00:23:20.204 "aliases": [ 00:23:20.204 "lvs/nvme0n1p0" 00:23:20.204 ], 00:23:20.204 "product_name": "Logical Volume", 00:23:20.204 "block_size": 4096, 00:23:20.204 "num_blocks": 26476544, 00:23:20.204 "uuid": "eac8b458-19a2-43d6-9ab6-a99e8f51c61e", 00:23:20.204 "assigned_rate_limits": { 00:23:20.204 "rw_ios_per_sec": 0, 00:23:20.204 "rw_mbytes_per_sec": 0, 00:23:20.204 "r_mbytes_per_sec": 0, 00:23:20.204 "w_mbytes_per_sec": 0 00:23:20.204 }, 00:23:20.204 "claimed": false, 00:23:20.204 "zoned": false, 00:23:20.204 "supported_io_types": { 00:23:20.204 "read": true, 00:23:20.204 "write": true, 00:23:20.204 "unmap": true, 00:23:20.204 "flush": false, 00:23:20.204 "reset": true, 00:23:20.204 "nvme_admin": false, 00:23:20.204 "nvme_io": false, 00:23:20.204 "nvme_io_md": false, 00:23:20.204 "write_zeroes": true, 00:23:20.204 "zcopy": false, 00:23:20.204 "get_zone_info": false, 00:23:20.204 "zone_management": false, 00:23:20.204 "zone_append": false, 00:23:20.204 "compare": false, 00:23:20.204 "compare_and_write": false, 00:23:20.204 "abort": false, 00:23:20.204 "seek_hole": true, 00:23:20.204 "seek_data": true, 00:23:20.204 "copy": false, 00:23:20.204 "nvme_iov_md": false 00:23:20.204 }, 00:23:20.204 "driver_specific": { 00:23:20.204 "lvol": { 00:23:20.204 "lvol_store_uuid": "6b6005ad-9a72-4ab8-bc50-9aef5c064f4a", 00:23:20.204 "base_bdev": "nvme0n1", 00:23:20.204 "thin_provision": true, 00:23:20.204 "num_allocated_clusters": 0, 00:23:20.204 "snapshot": false, 00:23:20.204 "clone": false, 00:23:20.204 "esnap_clone": false 00:23:20.204 } 00:23:20.204 } 00:23:20.204 } 00:23:20.204 ]' 00:23:20.204 08:03:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:23:20.204 08:03:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:23:20.204 08:03:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:23:20.204 08:03:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=26476544 00:23:20.204 08:03:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:23:20.204 08:03:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 103424 00:23:20.204 08:03:22 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:23:20.204 08:03:22 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d eac8b458-19a2-43d6-9ab6-a99e8f51c61e 
--l2p_dram_limit 10' 00:23:20.204 08:03:22 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:23:20.204 08:03:22 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:10.0 ']' 00:23:20.204 08:03:22 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:23:20.204 08:03:22 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d eac8b458-19a2-43d6-9ab6-a99e8f51c61e --l2p_dram_limit 10 -c nvc0n1p0 00:23:20.463 [2024-10-09 08:03:22.445234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:20.463 [2024-10-09 08:03:22.445318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:20.463 [2024-10-09 08:03:22.445363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:23:20.463 [2024-10-09 08:03:22.445379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:20.463 [2024-10-09 08:03:22.445475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:20.463 [2024-10-09 08:03:22.445495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:20.463 [2024-10-09 08:03:22.445511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:23:20.463 [2024-10-09 08:03:22.445524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:20.463 [2024-10-09 08:03:22.445568] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:20.463 [2024-10-09 08:03:22.446567] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:20.463 [2024-10-09 08:03:22.446612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:20.463 [2024-10-09 08:03:22.446628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:20.463 [2024-10-09 08:03:22.446644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.056 ms 00:23:20.463 [2024-10-09 08:03:22.446660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:20.463 [2024-10-09 08:03:22.446796] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 79fb1636-bf1a-4768-a170-c1f22467d828 00:23:20.463 [2024-10-09 08:03:22.447899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:20.463 [2024-10-09 08:03:22.448099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:23:20.463 [2024-10-09 08:03:22.448129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:23:20.463 [2024-10-09 08:03:22.448146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:20.463 [2024-10-09 08:03:22.453002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:20.463 [2024-10-09 08:03:22.453061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:20.463 [2024-10-09 08:03:22.453081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.785 ms 00:23:20.463 [2024-10-09 08:03:22.453096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:20.463 [2024-10-09 08:03:22.453231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:20.463 [2024-10-09 08:03:22.453256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:20.463 [2024-10-09 08:03:22.453271] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.092 ms 00:23:20.463 [2024-10-09 08:03:22.453291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:20.463 [2024-10-09 08:03:22.453384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:20.463 [2024-10-09 08:03:22.453410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:20.463 [2024-10-09 08:03:22.453424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:23:20.463 [2024-10-09 08:03:22.453439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:20.463 [2024-10-09 08:03:22.453491] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:20.463 [2024-10-09 08:03:22.458187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:20.463 [2024-10-09 08:03:22.458247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:20.463 [2024-10-09 08:03:22.458270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.702 ms 00:23:20.463 [2024-10-09 08:03:22.458284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:20.463 [2024-10-09 08:03:22.458376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:20.463 [2024-10-09 08:03:22.458408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:20.463 [2024-10-09 08:03:22.458426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:23:20.463 [2024-10-09 08:03:22.458441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:20.463 [2024-10-09 08:03:22.458507] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:23:20.463 [2024-10-09 08:03:22.458667] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:20.463 [2024-10-09 08:03:22.458700] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:20.463 [2024-10-09 08:03:22.458717] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:23:20.463 [2024-10-09 08:03:22.458738] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:20.463 [2024-10-09 08:03:22.458753] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:20.463 [2024-10-09 08:03:22.458769] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:23:20.463 [2024-10-09 08:03:22.458781] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:20.463 [2024-10-09 08:03:22.458795] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:20.463 [2024-10-09 08:03:22.458807] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:20.463 [2024-10-09 08:03:22.458822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:20.463 [2024-10-09 08:03:22.458847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:20.463 [2024-10-09 08:03:22.458863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.319 ms 00:23:20.463 [2024-10-09 08:03:22.458876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:20.463 [2024-10-09 08:03:22.458976] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:20.463 [2024-10-09 08:03:22.458998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:20.463 [2024-10-09 08:03:22.459013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:23:20.463 [2024-10-09 08:03:22.459025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:20.463 [2024-10-09 08:03:22.459138] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:20.463 [2024-10-09 08:03:22.459154] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:20.464 [2024-10-09 08:03:22.459169] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:20.464 [2024-10-09 08:03:22.459182] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:20.464 [2024-10-09 08:03:22.459197] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:20.464 [2024-10-09 08:03:22.459208] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:20.464 [2024-10-09 08:03:22.459222] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:23:20.464 [2024-10-09 08:03:22.459233] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:20.464 [2024-10-09 08:03:22.459247] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:23:20.464 [2024-10-09 08:03:22.459258] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:20.464 [2024-10-09 08:03:22.459272] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:20.464 [2024-10-09 08:03:22.459284] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:23:20.464 [2024-10-09 08:03:22.459297] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:20.464 [2024-10-09 08:03:22.459309] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:20.464 [2024-10-09 08:03:22.459322] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:23:20.464 [2024-10-09 08:03:22.459360] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:20.464 [2024-10-09 08:03:22.459380] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:20.464 [2024-10-09 08:03:22.459392] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:23:20.464 [2024-10-09 08:03:22.459405] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:20.464 [2024-10-09 08:03:22.459419] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:20.464 [2024-10-09 08:03:22.459433] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:23:20.464 [2024-10-09 08:03:22.459445] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:20.464 [2024-10-09 08:03:22.459476] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:20.464 [2024-10-09 08:03:22.459491] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:23:20.464 [2024-10-09 08:03:22.459510] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:20.464 [2024-10-09 08:03:22.459524] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:20.464 [2024-10-09 08:03:22.459538] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:23:20.464 [2024-10-09 08:03:22.459550] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:20.464 [2024-10-09 08:03:22.459563] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:20.464 [2024-10-09 08:03:22.459575] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:23:20.464 [2024-10-09 08:03:22.459588] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:20.464 [2024-10-09 08:03:22.459599] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:20.464 [2024-10-09 08:03:22.459614] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:23:20.464 [2024-10-09 08:03:22.459625] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:20.464 [2024-10-09 08:03:22.459639] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:20.464 [2024-10-09 08:03:22.459650] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:23:20.464 [2024-10-09 08:03:22.459676] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:20.464 [2024-10-09 08:03:22.459690] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:20.464 [2024-10-09 08:03:22.459704] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:23:20.464 [2024-10-09 08:03:22.459716] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:20.464 [2024-10-09 08:03:22.459729] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:20.464 [2024-10-09 08:03:22.459740] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:23:20.464 [2024-10-09 08:03:22.459754] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:20.464 [2024-10-09 08:03:22.459765] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:20.464 [2024-10-09 08:03:22.459779] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:20.464 [2024-10-09 08:03:22.459794] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:20.464 [2024-10-09 08:03:22.459808] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:20.464 [2024-10-09 08:03:22.459820] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:20.464 [2024-10-09 08:03:22.459838] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:20.464 [2024-10-09 08:03:22.459850] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:20.464 [2024-10-09 08:03:22.459864] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:20.464 [2024-10-09 08:03:22.459875] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:20.464 [2024-10-09 08:03:22.459890] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:20.464 [2024-10-09 08:03:22.459907] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:20.464 [2024-10-09 08:03:22.459924] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:20.464 [2024-10-09 08:03:22.459938] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:23:20.464 [2024-10-09 08:03:22.459953] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:23:20.464 [2024-10-09 08:03:22.459965] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:23:20.464 [2024-10-09 08:03:22.459979] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:23:20.464 [2024-10-09 08:03:22.459991] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:23:20.464 [2024-10-09 08:03:22.460005] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:23:20.464 [2024-10-09 08:03:22.460018] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:23:20.464 [2024-10-09 08:03:22.460032] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:23:20.464 [2024-10-09 08:03:22.460045] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:23:20.464 [2024-10-09 08:03:22.460061] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:23:20.464 [2024-10-09 08:03:22.460072] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:23:20.464 [2024-10-09 08:03:22.460087] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:23:20.464 [2024-10-09 08:03:22.460099] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:23:20.464 [2024-10-09 08:03:22.460113] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:23:20.464 [2024-10-09 08:03:22.460125] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:20.464 [2024-10-09 08:03:22.460140] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:20.464 [2024-10-09 08:03:22.460154] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:23:20.464 [2024-10-09 08:03:22.460182] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:20.464 [2024-10-09 08:03:22.460209] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:20.464 [2024-10-09 08:03:22.460224] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:20.464 [2024-10-09 08:03:22.460238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:20.464 [2024-10-09 08:03:22.460254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:20.464 [2024-10-09 08:03:22.460267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.174 ms 00:23:20.464 [2024-10-09 08:03:22.460281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:20.464 [2024-10-09 08:03:22.460355] mngt/ftl_mngt_misc.c: 
165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:23:20.464 [2024-10-09 08:03:22.460388] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:23:22.990 [2024-10-09 08:03:24.445246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.990 [2024-10-09 08:03:24.445374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:23:22.990 [2024-10-09 08:03:24.445417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1984.901 ms 00:23:22.990 [2024-10-09 08:03:24.445444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.990 [2024-10-09 08:03:24.478764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.990 [2024-10-09 08:03:24.478837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:22.990 [2024-10-09 08:03:24.478872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.922 ms 00:23:22.990 [2024-10-09 08:03:24.478888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.990 [2024-10-09 08:03:24.479084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.990 [2024-10-09 08:03:24.479110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:22.990 [2024-10-09 08:03:24.479125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:23:22.990 [2024-10-09 08:03:24.479143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.990 [2024-10-09 08:03:24.531169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.990 [2024-10-09 08:03:24.531258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:22.990 [2024-10-09 08:03:24.531295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 51.960 ms 00:23:22.990 [2024-10-09 08:03:24.531317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.990 [2024-10-09 08:03:24.531424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.990 [2024-10-09 08:03:24.531456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:22.990 [2024-10-09 08:03:24.531476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:23:22.990 [2024-10-09 08:03:24.531512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.990 [2024-10-09 08:03:24.532030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.990 [2024-10-09 08:03:24.532076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:22.990 [2024-10-09 08:03:24.532097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.371 ms 00:23:22.990 [2024-10-09 08:03:24.532122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.991 [2024-10-09 08:03:24.532311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.991 [2024-10-09 08:03:24.532357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:22.991 [2024-10-09 08:03:24.532379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.151 ms 00:23:22.991 [2024-10-09 08:03:24.532402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.991 [2024-10-09 08:03:24.551957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.991 [2024-10-09 08:03:24.552022] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:22.991 [2024-10-09 08:03:24.552043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.517 ms 00:23:22.991 [2024-10-09 08:03:24.552059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.991 [2024-10-09 08:03:24.565910] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:23:22.991 [2024-10-09 08:03:24.568905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.991 [2024-10-09 08:03:24.568957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:22.991 [2024-10-09 08:03:24.568980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.700 ms 00:23:22.991 [2024-10-09 08:03:24.568997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.991 [2024-10-09 08:03:24.624438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.991 [2024-10-09 08:03:24.624735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:23:22.991 [2024-10-09 08:03:24.624780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 55.373 ms 00:23:22.991 [2024-10-09 08:03:24.624795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.991 [2024-10-09 08:03:24.625028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.991 [2024-10-09 08:03:24.625061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:22.991 [2024-10-09 08:03:24.625081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.166 ms 00:23:22.991 [2024-10-09 08:03:24.625094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.991 [2024-10-09 08:03:24.657019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.991 [2024-10-09 08:03:24.657071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:23:22.991 [2024-10-09 08:03:24.657094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.845 ms 00:23:22.991 [2024-10-09 08:03:24.657108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.991 [2024-10-09 08:03:24.688816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.991 [2024-10-09 08:03:24.688869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:23:22.991 [2024-10-09 08:03:24.688894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.635 ms 00:23:22.991 [2024-10-09 08:03:24.688907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.991 [2024-10-09 08:03:24.689676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.991 [2024-10-09 08:03:24.689715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:22.991 [2024-10-09 08:03:24.689736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.709 ms 00:23:22.991 [2024-10-09 08:03:24.689750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.991 [2024-10-09 08:03:24.772962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.991 [2024-10-09 08:03:24.773033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:23:22.991 [2024-10-09 08:03:24.773063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 83.130 ms 00:23:22.991 [2024-10-09 08:03:24.773082] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.991 [2024-10-09 08:03:24.811691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.991 [2024-10-09 08:03:24.812025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:23:22.991 [2024-10-09 08:03:24.812085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.449 ms 00:23:22.991 [2024-10-09 08:03:24.812116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.991 [2024-10-09 08:03:24.851359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.991 [2024-10-09 08:03:24.851437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:23:22.991 [2024-10-09 08:03:24.851464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.127 ms 00:23:22.991 [2024-10-09 08:03:24.851477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.991 [2024-10-09 08:03:24.883151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.991 [2024-10-09 08:03:24.883377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:22.991 [2024-10-09 08:03:24.883415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.607 ms 00:23:22.991 [2024-10-09 08:03:24.883431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.991 [2024-10-09 08:03:24.883498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.991 [2024-10-09 08:03:24.883518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:22.991 [2024-10-09 08:03:24.883538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:23:22.991 [2024-10-09 08:03:24.883554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.991 [2024-10-09 08:03:24.883697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.991 [2024-10-09 08:03:24.883723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:22.991 [2024-10-09 08:03:24.883740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:23:22.991 [2024-10-09 08:03:24.883753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.991 [2024-10-09 08:03:24.884832] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2439.088 ms, result 0 00:23:22.991 { 00:23:22.991 "name": "ftl0", 00:23:22.991 "uuid": "79fb1636-bf1a-4768-a170-c1f22467d828" 00:23:22.991 } 00:23:22.991 08:03:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:23:22.991 08:03:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:23:23.556 08:03:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:23:23.556 08:03:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:23:23.556 08:03:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:23:23.815 /dev/nbd0 00:23:23.815 08:03:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:23:23.815 08:03:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:23:23.815 08:03:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@869 -- # local i 00:23:23.815 08:03:25 ftl.ftl_dirty_shutdown -- 
common/autotest_common.sh@871 -- # (( i = 1 )) 00:23:23.815 08:03:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:23:23.815 08:03:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:23:23.815 08:03:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@873 -- # break 00:23:23.815 08:03:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:23:23.815 08:03:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:23:23.815 08:03:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:23:23.815 1+0 records in 00:23:23.815 1+0 records out 00:23:23.815 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000274574 s, 14.9 MB/s 00:23:23.815 08:03:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:23:23.815 08:03:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@886 -- # size=4096 00:23:23.815 08:03:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:23:23.815 08:03:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:23:23.815 08:03:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@889 -- # return 0 00:23:23.815 08:03:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:23:23.815 [2024-10-09 08:03:25.756821] Starting SPDK v25.01-pre git sha1 1c2942c86 / DPDK 24.03.0 initialization... 00:23:23.815 [2024-10-09 08:03:25.757071] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79070 ] 00:23:24.073 [2024-10-09 08:03:25.934718] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:24.330 [2024-10-09 08:03:26.177986] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:23:25.703  [2024-10-09T08:03:28.648Z] Copying: 159/1024 [MB] (159 MBps) [2024-10-09T08:03:29.583Z] Copying: 327/1024 [MB] (167 MBps) [2024-10-09T08:03:30.519Z] Copying: 494/1024 [MB] (167 MBps) [2024-10-09T08:03:31.895Z] Copying: 662/1024 [MB] (167 MBps) [2024-10-09T08:03:32.829Z] Copying: 826/1024 [MB] (163 MBps) [2024-10-09T08:03:32.829Z] Copying: 975/1024 [MB] (148 MBps) [2024-10-09T08:03:34.205Z] Copying: 1024/1024 [MB] (average 162 MBps) 00:23:32.193 00:23:32.193 08:03:33 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:23:34.718 08:03:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:23:34.718 [2024-10-09 08:03:36.346783] Starting SPDK v25.01-pre git sha1 1c2942c86 / DPDK 24.03.0 initialization... 
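Taken together, the trace up to this point reduces to the following bring-up and write sequence; a condensed sketch assembled from the commands echoed in this run (the authoritative steps live in test/ftl/dirty_shutdown.sh and ftl/common.sh, and the UUIDs, sizes, and PCI address below are the ones printed above). The 103424 MiB data-volume size falls straight out of the bdev_get_bdevs output: 26476544 blocks x 4096 B = 103424 MiB, with 5171 MiB split off the cache controller as nvc0n1p0.

# Attach the PCIe controller used as NV cache and carve off a 5171 MiB write-buffer split (-> nvc0n1p0)
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1

# Create the FTL bdev on the thin-provisioned lvol, capping the L2P at 10 MiB of DRAM
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 \
    -d eac8b458-19a2-43d6-9ab6-a99e8f51c61e --l2p_dram_limit 10 -c nvc0n1p0

# Expose ftl0 over NBD, generate 1 GiB of random data (262144 x 4096 B), record its
# checksum, then push it through the FTL device with direct I/O
modprobe nbd
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom \
    --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144
md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 \
    --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 \
    --bs=4096 --count=262144 --oflag=direct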
00:23:34.718 [2024-10-09 08:03:36.346937] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79174 ] 00:23:34.718 [2024-10-09 08:03:36.516707] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:34.977 [2024-10-09 08:03:36.729598] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:23:36.381  [2024-10-09T08:03:39.328Z] Copying: 16/1024 [MB] (16 MBps) [2024-10-09T08:03:40.263Z] Copying: 32/1024 [MB] (16 MBps) [2024-10-09T08:03:41.197Z] Copying: 49/1024 [MB] (16 MBps) [2024-10-09T08:03:42.134Z] Copying: 65/1024 [MB] (16 MBps) [2024-10-09T08:03:43.085Z] Copying: 82/1024 [MB] (16 MBps) [2024-10-09T08:03:44.049Z] Copying: 99/1024 [MB] (17 MBps) [2024-10-09T08:03:45.423Z] Copying: 117/1024 [MB] (17 MBps) [2024-10-09T08:03:46.357Z] Copying: 135/1024 [MB] (18 MBps) [2024-10-09T08:03:47.291Z] Copying: 153/1024 [MB] (17 MBps) [2024-10-09T08:03:48.226Z] Copying: 171/1024 [MB] (18 MBps) [2024-10-09T08:03:49.160Z] Copying: 188/1024 [MB] (17 MBps) [2024-10-09T08:03:50.094Z] Copying: 205/1024 [MB] (16 MBps) [2024-10-09T08:03:51.028Z] Copying: 221/1024 [MB] (15 MBps) [2024-10-09T08:03:52.403Z] Copying: 237/1024 [MB] (15 MBps) [2024-10-09T08:03:53.339Z] Copying: 253/1024 [MB] (15 MBps) [2024-10-09T08:03:54.273Z] Copying: 266/1024 [MB] (12 MBps) [2024-10-09T08:03:55.206Z] Copying: 282/1024 [MB] (16 MBps) [2024-10-09T08:03:56.138Z] Copying: 299/1024 [MB] (17 MBps) [2024-10-09T08:03:57.070Z] Copying: 317/1024 [MB] (17 MBps) [2024-10-09T08:03:58.444Z] Copying: 334/1024 [MB] (16 MBps) [2024-10-09T08:03:59.010Z] Copying: 350/1024 [MB] (16 MBps) [2024-10-09T08:04:00.385Z] Copying: 366/1024 [MB] (16 MBps) [2024-10-09T08:04:01.320Z] Copying: 383/1024 [MB] (16 MBps) [2024-10-09T08:04:02.255Z] Copying: 399/1024 [MB] (16 MBps) [2024-10-09T08:04:03.189Z] Copying: 415/1024 [MB] (15 MBps) [2024-10-09T08:04:04.123Z] Copying: 433/1024 [MB] (18 MBps) [2024-10-09T08:04:05.057Z] Copying: 450/1024 [MB] (16 MBps) [2024-10-09T08:04:06.439Z] Copying: 468/1024 [MB] (17 MBps) [2024-10-09T08:04:07.373Z] Copying: 485/1024 [MB] (17 MBps) [2024-10-09T08:04:08.308Z] Copying: 502/1024 [MB] (16 MBps) [2024-10-09T08:04:09.256Z] Copying: 519/1024 [MB] (17 MBps) [2024-10-09T08:04:10.203Z] Copying: 535/1024 [MB] (15 MBps) [2024-10-09T08:04:11.137Z] Copying: 552/1024 [MB] (16 MBps) [2024-10-09T08:04:12.073Z] Copying: 568/1024 [MB] (15 MBps) [2024-10-09T08:04:13.007Z] Copying: 583/1024 [MB] (15 MBps) [2024-10-09T08:04:14.394Z] Copying: 599/1024 [MB] (15 MBps) [2024-10-09T08:04:15.330Z] Copying: 614/1024 [MB] (15 MBps) [2024-10-09T08:04:16.264Z] Copying: 630/1024 [MB] (15 MBps) [2024-10-09T08:04:17.198Z] Copying: 646/1024 [MB] (15 MBps) [2024-10-09T08:04:18.165Z] Copying: 662/1024 [MB] (16 MBps) [2024-10-09T08:04:19.099Z] Copying: 678/1024 [MB] (15 MBps) [2024-10-09T08:04:20.033Z] Copying: 694/1024 [MB] (15 MBps) [2024-10-09T08:04:21.408Z] Copying: 709/1024 [MB] (15 MBps) [2024-10-09T08:04:22.342Z] Copying: 725/1024 [MB] (16 MBps) [2024-10-09T08:04:23.277Z] Copying: 743/1024 [MB] (17 MBps) [2024-10-09T08:04:24.212Z] Copying: 758/1024 [MB] (15 MBps) [2024-10-09T08:04:25.250Z] Copying: 773/1024 [MB] (15 MBps) [2024-10-09T08:04:26.186Z] Copying: 789/1024 [MB] (15 MBps) [2024-10-09T08:04:27.120Z] Copying: 805/1024 [MB] (15 MBps) [2024-10-09T08:04:28.055Z] Copying: 821/1024 [MB] (16 MBps) 
[2024-10-09T08:04:29.028Z] Copying: 841808/1048576 [kB] (464 kBps) [2024-10-09T08:04:30.404Z] Copying: 851360/1048576 [kB] (9552 kBps) [2024-10-09T08:04:31.339Z] Copying: 847/1024 [MB] (16 MBps) [2024-10-09T08:04:32.316Z] Copying: 863/1024 [MB] (15 MBps) [2024-10-09T08:04:33.251Z] Copying: 878/1024 [MB] (15 MBps) [2024-10-09T08:04:34.186Z] Copying: 894/1024 [MB] (15 MBps) [2024-10-09T08:04:35.121Z] Copying: 911/1024 [MB] (16 MBps) [2024-10-09T08:04:36.057Z] Copying: 927/1024 [MB] (16 MBps) [2024-10-09T08:04:37.431Z] Copying: 943/1024 [MB] (16 MBps) [2024-10-09T08:04:38.367Z] Copying: 960/1024 [MB] (16 MBps) [2024-10-09T08:04:39.303Z] Copying: 976/1024 [MB] (16 MBps) [2024-10-09T08:04:40.238Z] Copying: 993/1024 [MB] (16 MBps) [2024-10-09T08:04:41.174Z] Copying: 1009/1024 [MB] (16 MBps) [2024-10-09T08:04:42.110Z] Copying: 1024/1024 [MB] (average 16 MBps) 00:24:40.098 00:24:40.098 08:04:42 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:24:40.098 08:04:42 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:24:40.357 08:04:42 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:24:40.925 [2024-10-09 08:04:42.647221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:40.925 [2024-10-09 08:04:42.647297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:40.925 [2024-10-09 08:04:42.647337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:24:40.925 [2024-10-09 08:04:42.647388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.925 [2024-10-09 08:04:42.647430] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:40.925 [2024-10-09 08:04:42.650991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:40.925 [2024-10-09 08:04:42.651173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:40.925 [2024-10-09 08:04:42.651209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.516 ms 00:24:40.925 [2024-10-09 08:04:42.651223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.925 [2024-10-09 08:04:42.653037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:40.925 [2024-10-09 08:04:42.653077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:40.925 [2024-10-09 08:04:42.653102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.755 ms 00:24:40.925 [2024-10-09 08:04:42.653114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.925 [2024-10-09 08:04:42.669802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:40.925 [2024-10-09 08:04:42.669864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:40.925 [2024-10-09 08:04:42.669910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.656 ms 00:24:40.925 [2024-10-09 08:04:42.669923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.925 [2024-10-09 08:04:42.676970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:40.925 [2024-10-09 08:04:42.677007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:24:40.925 [2024-10-09 08:04:42.677031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.998 ms 
00:24:40.925 [2024-10-09 08:04:42.677043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.925 [2024-10-09 08:04:42.707569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:40.925 [2024-10-09 08:04:42.707617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:40.925 [2024-10-09 08:04:42.707654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.421 ms 00:24:40.925 [2024-10-09 08:04:42.707665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.925 [2024-10-09 08:04:42.726577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:40.925 [2024-10-09 08:04:42.726631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:40.925 [2024-10-09 08:04:42.726653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.813 ms 00:24:40.925 [2024-10-09 08:04:42.726665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.925 [2024-10-09 08:04:42.726852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:40.925 [2024-10-09 08:04:42.726873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:40.925 [2024-10-09 08:04:42.726888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.130 ms 00:24:40.925 [2024-10-09 08:04:42.726902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.925 [2024-10-09 08:04:42.756963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:40.925 [2024-10-09 08:04:42.757006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:24:40.925 [2024-10-09 08:04:42.757042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.028 ms 00:24:40.926 [2024-10-09 08:04:42.757053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.926 [2024-10-09 08:04:42.786346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:40.926 [2024-10-09 08:04:42.786416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:24:40.926 [2024-10-09 08:04:42.786454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.240 ms 00:24:40.926 [2024-10-09 08:04:42.786465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.926 [2024-10-09 08:04:42.817489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:40.926 [2024-10-09 08:04:42.817543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:40.926 [2024-10-09 08:04:42.817566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.969 ms 00:24:40.926 [2024-10-09 08:04:42.817579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.926 [2024-10-09 08:04:42.849826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:40.926 [2024-10-09 08:04:42.850081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:40.926 [2024-10-09 08:04:42.850121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.109 ms 00:24:40.926 [2024-10-09 08:04:42.850137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.926 [2024-10-09 08:04:42.850208] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:40.926 [2024-10-09 08:04:42.850235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 
00:24:40.926 [2024-10-09 08:04:42.850253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:24:40.926 [2024-10-09 08:04:42.850267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:40.926 [2024-10-09 08:04:42.850282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:40.926 [2024-10-09 08:04:42.850295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:40.926 [2024-10-09 08:04:42.850309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:40.926 [2024-10-09 08:04:42.850322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:40.926 [2024-10-09 08:04:42.850373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:40.926 [2024-10-09 08:04:42.850390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:40.926 [2024-10-09 08:04:42.850405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:40.926 [2024-10-09 08:04:42.850418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:40.926 [2024-10-09 08:04:42.850433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:40.926 [2024-10-09 08:04:42.850446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:40.926 [2024-10-09 08:04:42.850461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:40.926 [2024-10-09 08:04:42.850473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:40.926 [2024-10-09 08:04:42.850488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:40.926 [2024-10-09 08:04:42.850501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:40.926 [2024-10-09 08:04:42.850516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:40.926 [2024-10-09 08:04:42.850529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:40.926 [2024-10-09 08:04:42.850547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:40.926 [2024-10-09 08:04:42.850560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:40.926 [2024-10-09 08:04:42.850575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:40.926 [2024-10-09 08:04:42.850588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:40.926 [2024-10-09 08:04:42.850605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:40.926 [2024-10-09 08:04:42.850618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:40.926 [2024-10-09 08:04:42.850633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 
state: free 00:24:40.926 [2024-10-09 08:04:42.850645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:40.926 [2024-10-09 08:04:42.850661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:40.926 [2024-10-09 08:04:42.850674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:40.926 [2024-10-09 08:04:42.850689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:40.926 [2024-10-09 08:04:42.850701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:40.926 [2024-10-09 08:04:42.850716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:40.926 [2024-10-09 08:04:42.850730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:40.926 [2024-10-09 08:04:42.850745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:40.926 [2024-10-09 08:04:42.850758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:40.926 [2024-10-09 08:04:42.850773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:40.926 [2024-10-09 08:04:42.850786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:40.926 [2024-10-09 08:04:42.850801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:40.926 [2024-10-09 08:04:42.850814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:40.926 [2024-10-09 08:04:42.850830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:40.926 [2024-10-09 08:04:42.850843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:40.926 [2024-10-09 08:04:42.850858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:40.926 [2024-10-09 08:04:42.850871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:40.926 [2024-10-09 08:04:42.850886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:40.926 [2024-10-09 08:04:42.850899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:40.926 [2024-10-09 08:04:42.850915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:40.926 [2024-10-09 08:04:42.850928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:40.926 [2024-10-09 08:04:42.850943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:40.926 [2024-10-09 08:04:42.850956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:40.926 [2024-10-09 08:04:42.850971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:40.926 [2024-10-09 08:04:42.850983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 
0 / 261120 wr_cnt: 0 state: free 00:24:40.926 [2024-10-09 08:04:42.850998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:40.926 [2024-10-09 08:04:42.851023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:40.926 [2024-10-09 08:04:42.851039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:40.926 [2024-10-09 08:04:42.851052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:40.926 [2024-10-09 08:04:42.851069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:40.926 [2024-10-09 08:04:42.851083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:40.926 [2024-10-09 08:04:42.851098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:40.926 [2024-10-09 08:04:42.851110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:40.926 [2024-10-09 08:04:42.851125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:40.926 [2024-10-09 08:04:42.851138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:40.927 [2024-10-09 08:04:42.851152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:40.927 [2024-10-09 08:04:42.851165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:40.927 [2024-10-09 08:04:42.851179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:40.927 [2024-10-09 08:04:42.851192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:40.927 [2024-10-09 08:04:42.851207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:40.927 [2024-10-09 08:04:42.851220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:40.927 [2024-10-09 08:04:42.851236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:40.927 [2024-10-09 08:04:42.851249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:40.927 [2024-10-09 08:04:42.851264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:40.927 [2024-10-09 08:04:42.851277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:40.927 [2024-10-09 08:04:42.851296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:40.927 [2024-10-09 08:04:42.851308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:40.927 [2024-10-09 08:04:42.851323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:40.927 [2024-10-09 08:04:42.851351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:40.927 [2024-10-09 08:04:42.851368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:40.927 [2024-10-09 08:04:42.851381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:40.927 [2024-10-09 08:04:42.851396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:40.927 [2024-10-09 08:04:42.851408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:40.927 [2024-10-09 08:04:42.851424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:40.927 [2024-10-09 08:04:42.851437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:40.927 [2024-10-09 08:04:42.851452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:40.927 [2024-10-09 08:04:42.851464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:40.927 [2024-10-09 08:04:42.851479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:40.927 [2024-10-09 08:04:42.851492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:40.927 [2024-10-09 08:04:42.851507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:40.927 [2024-10-09 08:04:42.851520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:40.927 [2024-10-09 08:04:42.851537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:40.927 [2024-10-09 08:04:42.851550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:40.927 [2024-10-09 08:04:42.851565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:40.927 [2024-10-09 08:04:42.851577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:40.927 [2024-10-09 08:04:42.851592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:40.927 [2024-10-09 08:04:42.851605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:40.927 [2024-10-09 08:04:42.851619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:40.927 [2024-10-09 08:04:42.851632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:40.927 [2024-10-09 08:04:42.851647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:40.927 [2024-10-09 08:04:42.851659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:40.927 [2024-10-09 08:04:42.851690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:40.927 [2024-10-09 08:04:42.851705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:40.927 [2024-10-09 08:04:42.851721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:40.927 [2024-10-09 08:04:42.851744] ftl_debug.c: 
211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:40.927 [2024-10-09 08:04:42.851763] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 79fb1636-bf1a-4768-a170-c1f22467d828 00:24:40.927 [2024-10-09 08:04:42.851776] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:24:40.927 [2024-10-09 08:04:42.851792] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:24:40.927 [2024-10-09 08:04:42.851804] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:24:40.927 [2024-10-09 08:04:42.851818] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:24:40.927 [2024-10-09 08:04:42.851829] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:40.927 [2024-10-09 08:04:42.851844] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:40.927 [2024-10-09 08:04:42.851855] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:40.927 [2024-10-09 08:04:42.851877] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:40.927 [2024-10-09 08:04:42.851888] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:40.927 [2024-10-09 08:04:42.851904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:40.927 [2024-10-09 08:04:42.851917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:40.927 [2024-10-09 08:04:42.851932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.700 ms 00:24:40.927 [2024-10-09 08:04:42.851944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.927 [2024-10-09 08:04:42.869302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:40.927 [2024-10-09 08:04:42.869366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:40.927 [2024-10-09 08:04:42.869390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.267 ms 00:24:40.927 [2024-10-09 08:04:42.869403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.927 [2024-10-09 08:04:42.869860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:40.927 [2024-10-09 08:04:42.869893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:40.927 [2024-10-09 08:04:42.869913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.411 ms 00:24:40.927 [2024-10-09 08:04:42.869929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.927 [2024-10-09 08:04:42.920314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:40.927 [2024-10-09 08:04:42.920389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:40.927 [2024-10-09 08:04:42.920412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:40.927 [2024-10-09 08:04:42.920425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.927 [2024-10-09 08:04:42.920516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:40.927 [2024-10-09 08:04:42.920533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:40.927 [2024-10-09 08:04:42.920548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:40.927 [2024-10-09 08:04:42.920564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.927 [2024-10-09 08:04:42.920714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Rollback 00:24:40.927 [2024-10-09 08:04:42.920736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:40.927 [2024-10-09 08:04:42.920752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:40.927 [2024-10-09 08:04:42.920765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.927 [2024-10-09 08:04:42.920798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:40.927 [2024-10-09 08:04:42.920814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:40.927 [2024-10-09 08:04:42.920828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:40.927 [2024-10-09 08:04:42.920840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:41.187 [2024-10-09 08:04:43.024616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:41.187 [2024-10-09 08:04:43.024867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:41.187 [2024-10-09 08:04:43.025006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:41.187 [2024-10-09 08:04:43.025060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:41.187 [2024-10-09 08:04:43.108149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:41.187 [2024-10-09 08:04:43.108379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:41.187 [2024-10-09 08:04:43.108416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:41.187 [2024-10-09 08:04:43.108434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:41.187 [2024-10-09 08:04:43.108574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:41.187 [2024-10-09 08:04:43.108595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:41.187 [2024-10-09 08:04:43.108610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:41.187 [2024-10-09 08:04:43.108622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:41.187 [2024-10-09 08:04:43.108699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:41.187 [2024-10-09 08:04:43.108717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:41.187 [2024-10-09 08:04:43.108747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:41.187 [2024-10-09 08:04:43.108759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:41.187 [2024-10-09 08:04:43.108907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:41.187 [2024-10-09 08:04:43.108927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:41.187 [2024-10-09 08:04:43.108942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:41.187 [2024-10-09 08:04:43.108969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:41.187 [2024-10-09 08:04:43.109023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:41.187 [2024-10-09 08:04:43.109041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:41.187 [2024-10-09 08:04:43.109055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:41.187 [2024-10-09 08:04:43.109081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:41.187 
[2024-10-09 08:04:43.109132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:41.187 [2024-10-09 08:04:43.109150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:41.187 [2024-10-09 08:04:43.109164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:41.187 [2024-10-09 08:04:43.109175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:41.187 [2024-10-09 08:04:43.109234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:41.187 [2024-10-09 08:04:43.109252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:41.187 [2024-10-09 08:04:43.109266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:41.187 [2024-10-09 08:04:43.109277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:41.187 [2024-10-09 08:04:43.109485] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 462.232 ms, result 0 00:24:41.187 true 00:24:41.187 08:04:43 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@83 -- # kill -9 78922 00:24:41.187 08:04:43 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid78922 00:24:41.187 08:04:43 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144 00:24:41.446 [2024-10-09 08:04:43.247536] Starting SPDK v25.01-pre git sha1 1c2942c86 / DPDK 24.03.0 initialization... 00:24:41.446 [2024-10-09 08:04:43.247807] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79839 ] 00:24:41.446 [2024-10-09 08:04:43.424471] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:41.704 [2024-10-09 08:04:43.606767] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:24:43.081  [2024-10-09T08:04:46.028Z] Copying: 167/1024 [MB] (167 MBps) [2024-10-09T08:04:46.963Z] Copying: 326/1024 [MB] (159 MBps) [2024-10-09T08:04:47.899Z] Copying: 485/1024 [MB] (158 MBps) [2024-10-09T08:04:49.274Z] Copying: 653/1024 [MB] (168 MBps) [2024-10-09T08:04:50.209Z] Copying: 817/1024 [MB] (163 MBps) [2024-10-09T08:04:50.209Z] Copying: 981/1024 [MB] (164 MBps) [2024-10-09T08:04:51.583Z] Copying: 1024/1024 [MB] (average 163 MBps) 00:24:49.571 00:24:49.571 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 78922 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 00:24:49.571 08:04:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:24:49.571 [2024-10-09 08:04:51.456135] Starting SPDK v25.01-pre git sha1 1c2942c86 / DPDK 24.03.0 initialization... 
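
For readers skimming this excerpt: the traced commands above are the heart of the dirty-shutdown scenario. The target (pid 78922) is SIGKILLed mid-test and its trace file removed; the follow-up spdk_dd that opens ftl0 then has to recover the device (note the "Performing recovery on blobstore" notices below). A minimal sketch of the sequence, using the exact commands from this run (the canonical logic lives in test/ftl/dirty_shutdown.sh, of which only the traced lines are visible here):

  SPDK=/home/vagrant/spdk_repo/spdk

  kill -9 78922                              # @83: SIGKILL the target, no clean FTL shutdown
  rm -f /dev/shm/spdk_tgt_trace.pid78922     # @84: drop the stale trace file

  # @87: stage 1 GiB of random data (262144 blocks x 4 KiB) in a plain file
  "$SPDK/build/bin/spdk_dd" --if=/dev/urandom \
      --of="$SPDK/test/ftl/testfile2" --bs=4096 --count=262144

  # @88: replay it into the dirty ftl0 bdev at a 262144-block offset, which
  # is what triggers the recovery and dirty-state startup traced below
  "$SPDK/build/bin/spdk_dd" --if="$SPDK/test/ftl/testfile2" --ob=ftl0 \
      --count=262144 --seek=262144 --json="$SPDK/test/ftl/config/ftl.json"

The --bs=4096 --count=262144 pair works out to exactly 1 GiB, matching the 1024/1024 [MB] progress reported above.
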
00:24:49.571 [2024-10-09 08:04:51.456313] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79922 ] 00:24:49.830 [2024-10-09 08:04:51.630135] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:49.830 [2024-10-09 08:04:51.827288] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:24:50.396 [2024-10-09 08:04:52.152670] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:50.396 [2024-10-09 08:04:52.152904] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:50.396 [2024-10-09 08:04:52.219412] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:24:50.396 [2024-10-09 08:04:52.219832] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:24:50.396 [2024-10-09 08:04:52.220123] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:24:50.655 [2024-10-09 08:04:52.466290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.655 [2024-10-09 08:04:52.466555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:50.655 [2024-10-09 08:04:52.466593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:24:50.655 [2024-10-09 08:04:52.466608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.655 [2024-10-09 08:04:52.466685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.655 [2024-10-09 08:04:52.466704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:50.655 [2024-10-09 08:04:52.466717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:24:50.655 [2024-10-09 08:04:52.466733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.655 [2024-10-09 08:04:52.466766] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:50.655 [2024-10-09 08:04:52.467698] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:50.655 [2024-10-09 08:04:52.467726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.655 [2024-10-09 08:04:52.467743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:50.655 [2024-10-09 08:04:52.467756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.968 ms 00:24:50.655 [2024-10-09 08:04:52.467766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.655 [2024-10-09 08:04:52.468928] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:24:50.655 [2024-10-09 08:04:52.485699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.655 [2024-10-09 08:04:52.485747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:24:50.655 [2024-10-09 08:04:52.485765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.773 ms 00:24:50.655 [2024-10-09 08:04:52.485777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.655 [2024-10-09 08:04:52.485851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.655 [2024-10-09 08:04:52.485873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super 
block 00:24:50.655 [2024-10-09 08:04:52.485891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:24:50.655 [2024-10-09 08:04:52.485902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.655 [2024-10-09 08:04:52.490479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.655 [2024-10-09 08:04:52.490529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:50.655 [2024-10-09 08:04:52.490547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.479 ms 00:24:50.655 [2024-10-09 08:04:52.490558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.655 [2024-10-09 08:04:52.490662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.655 [2024-10-09 08:04:52.490683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:50.655 [2024-10-09 08:04:52.490696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:24:50.655 [2024-10-09 08:04:52.490707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.655 [2024-10-09 08:04:52.490768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.655 [2024-10-09 08:04:52.490786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:50.655 [2024-10-09 08:04:52.490799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:24:50.655 [2024-10-09 08:04:52.490811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.655 [2024-10-09 08:04:52.490847] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:50.655 [2024-10-09 08:04:52.495156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.655 [2024-10-09 08:04:52.495202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:50.655 [2024-10-09 08:04:52.495218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.320 ms 00:24:50.655 [2024-10-09 08:04:52.495229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.655 [2024-10-09 08:04:52.495274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.655 [2024-10-09 08:04:52.495289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:50.655 [2024-10-09 08:04:52.495301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:24:50.655 [2024-10-09 08:04:52.495313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.655 [2024-10-09 08:04:52.495380] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:24:50.655 [2024-10-09 08:04:52.495415] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:24:50.655 [2024-10-09 08:04:52.495459] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:24:50.655 [2024-10-09 08:04:52.495483] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:24:50.655 [2024-10-09 08:04:52.495602] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:50.655 [2024-10-09 08:04:52.495619] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:50.655 
[2024-10-09 08:04:52.495634] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:24:50.655 [2024-10-09 08:04:52.495648] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:50.655 [2024-10-09 08:04:52.495661] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:50.655 [2024-10-09 08:04:52.495684] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:24:50.655 [2024-10-09 08:04:52.495697] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:50.655 [2024-10-09 08:04:52.495708] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:50.655 [2024-10-09 08:04:52.495719] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:50.655 [2024-10-09 08:04:52.495730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.655 [2024-10-09 08:04:52.495748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:50.655 [2024-10-09 08:04:52.495761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.354 ms 00:24:50.655 [2024-10-09 08:04:52.495771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.655 [2024-10-09 08:04:52.495878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.655 [2024-10-09 08:04:52.495894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:50.655 [2024-10-09 08:04:52.495906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:24:50.655 [2024-10-09 08:04:52.495916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.655 [2024-10-09 08:04:52.496063] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:50.655 [2024-10-09 08:04:52.496085] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:50.655 [2024-10-09 08:04:52.496104] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:50.655 [2024-10-09 08:04:52.496116] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:50.655 [2024-10-09 08:04:52.496127] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:50.655 [2024-10-09 08:04:52.496138] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:50.655 [2024-10-09 08:04:52.496149] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:24:50.655 [2024-10-09 08:04:52.496159] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:50.655 [2024-10-09 08:04:52.496170] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:24:50.655 [2024-10-09 08:04:52.496191] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:50.655 [2024-10-09 08:04:52.496202] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:50.655 [2024-10-09 08:04:52.496212] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:24:50.655 [2024-10-09 08:04:52.496222] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:50.655 [2024-10-09 08:04:52.496233] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:50.655 [2024-10-09 08:04:52.496243] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:24:50.655 [2024-10-09 08:04:52.496255] ftl_layout.c: 133:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:50.655 [2024-10-09 08:04:52.496265] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:50.655 [2024-10-09 08:04:52.496275] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:24:50.655 [2024-10-09 08:04:52.496285] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:50.655 [2024-10-09 08:04:52.496295] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:50.655 [2024-10-09 08:04:52.496306] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:24:50.655 [2024-10-09 08:04:52.496316] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:50.655 [2024-10-09 08:04:52.496326] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:50.655 [2024-10-09 08:04:52.496356] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:24:50.656 [2024-10-09 08:04:52.496369] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:50.656 [2024-10-09 08:04:52.496379] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:50.656 [2024-10-09 08:04:52.496389] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:24:50.656 [2024-10-09 08:04:52.496399] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:50.656 [2024-10-09 08:04:52.496409] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:50.656 [2024-10-09 08:04:52.496419] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:24:50.656 [2024-10-09 08:04:52.496429] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:50.656 [2024-10-09 08:04:52.496439] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:50.656 [2024-10-09 08:04:52.496449] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:24:50.656 [2024-10-09 08:04:52.496459] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:50.656 [2024-10-09 08:04:52.496469] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:50.656 [2024-10-09 08:04:52.496480] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:24:50.656 [2024-10-09 08:04:52.496489] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:50.656 [2024-10-09 08:04:52.496500] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:50.656 [2024-10-09 08:04:52.496510] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:24:50.656 [2024-10-09 08:04:52.496520] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:50.656 [2024-10-09 08:04:52.496530] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:50.656 [2024-10-09 08:04:52.496540] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:24:50.656 [2024-10-09 08:04:52.496550] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:50.656 [2024-10-09 08:04:52.496560] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:50.656 [2024-10-09 08:04:52.496571] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:50.656 [2024-10-09 08:04:52.496582] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:50.656 [2024-10-09 08:04:52.496593] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:50.656 [2024-10-09 
08:04:52.496605] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:50.656 [2024-10-09 08:04:52.496616] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:50.656 [2024-10-09 08:04:52.496626] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:50.656 [2024-10-09 08:04:52.496637] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:50.656 [2024-10-09 08:04:52.496646] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:50.656 [2024-10-09 08:04:52.496656] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:50.656 [2024-10-09 08:04:52.496668] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:50.656 [2024-10-09 08:04:52.496682] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:50.656 [2024-10-09 08:04:52.496695] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:24:50.656 [2024-10-09 08:04:52.496706] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:24:50.656 [2024-10-09 08:04:52.496717] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:24:50.656 [2024-10-09 08:04:52.496727] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:24:50.656 [2024-10-09 08:04:52.496738] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:24:50.656 [2024-10-09 08:04:52.496749] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:24:50.656 [2024-10-09 08:04:52.496760] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:24:50.656 [2024-10-09 08:04:52.496771] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:24:50.656 [2024-10-09 08:04:52.496783] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:24:50.656 [2024-10-09 08:04:52.496795] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:24:50.656 [2024-10-09 08:04:52.496806] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:24:50.656 [2024-10-09 08:04:52.496817] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:24:50.656 [2024-10-09 08:04:52.496828] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:24:50.656 [2024-10-09 08:04:52.496839] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:24:50.656 [2024-10-09 08:04:52.496857] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - 
base dev: 00:24:50.656 [2024-10-09 08:04:52.496869] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:50.656 [2024-10-09 08:04:52.496887] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:24:50.656 [2024-10-09 08:04:52.496899] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:50.656 [2024-10-09 08:04:52.496910] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:50.656 [2024-10-09 08:04:52.496921] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:50.656 [2024-10-09 08:04:52.496933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.656 [2024-10-09 08:04:52.496944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:50.656 [2024-10-09 08:04:52.496956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.942 ms 00:24:50.656 [2024-10-09 08:04:52.496967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.656 [2024-10-09 08:04:52.547281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.656 [2024-10-09 08:04:52.547366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:50.656 [2024-10-09 08:04:52.547390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 50.248 ms 00:24:50.656 [2024-10-09 08:04:52.547403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.656 [2024-10-09 08:04:52.547538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.656 [2024-10-09 08:04:52.547555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:50.656 [2024-10-09 08:04:52.547569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:24:50.656 [2024-10-09 08:04:52.547580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.656 [2024-10-09 08:04:52.588077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.656 [2024-10-09 08:04:52.588144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:50.656 [2024-10-09 08:04:52.588164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.378 ms 00:24:50.656 [2024-10-09 08:04:52.588187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.656 [2024-10-09 08:04:52.588267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.656 [2024-10-09 08:04:52.588285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:50.656 [2024-10-09 08:04:52.588299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:50.656 [2024-10-09 08:04:52.588310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.656 [2024-10-09 08:04:52.588765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.656 [2024-10-09 08:04:52.588791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:50.656 [2024-10-09 08:04:52.588805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.314 ms 00:24:50.656 [2024-10-09 08:04:52.588816] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.656 [2024-10-09 08:04:52.588984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.656 [2024-10-09 08:04:52.589005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:50.656 [2024-10-09 08:04:52.589018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.137 ms 00:24:50.656 [2024-10-09 08:04:52.589030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.656 [2024-10-09 08:04:52.605247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.656 [2024-10-09 08:04:52.605306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:50.656 [2024-10-09 08:04:52.605325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.188 ms 00:24:50.656 [2024-10-09 08:04:52.605360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.656 [2024-10-09 08:04:52.621983] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:24:50.656 [2024-10-09 08:04:52.622055] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:24:50.656 [2024-10-09 08:04:52.622078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.656 [2024-10-09 08:04:52.622091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:24:50.656 [2024-10-09 08:04:52.622107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.516 ms 00:24:50.656 [2024-10-09 08:04:52.622118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.656 [2024-10-09 08:04:52.658289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.656 [2024-10-09 08:04:52.658412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:24:50.656 [2024-10-09 08:04:52.658467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.088 ms 00:24:50.656 [2024-10-09 08:04:52.658489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.915 [2024-10-09 08:04:52.675518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.915 [2024-10-09 08:04:52.675781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:24:50.915 [2024-10-09 08:04:52.675814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.915 ms 00:24:50.915 [2024-10-09 08:04:52.675826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.915 [2024-10-09 08:04:52.691820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.915 [2024-10-09 08:04:52.691892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:24:50.915 [2024-10-09 08:04:52.691913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.918 ms 00:24:50.915 [2024-10-09 08:04:52.691926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.915 [2024-10-09 08:04:52.692854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.915 [2024-10-09 08:04:52.692884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:50.915 [2024-10-09 08:04:52.692899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.713 ms 00:24:50.915 [2024-10-09 08:04:52.692911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
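
The startup that follows the kill is a long run of paired trace_step entries: ftl_mngt.c:428 logs each step's name and ftl_mngt.c:430 its duration (Load super block 16.773 ms, Initialize metadata 50.248 ms, Restore valid map metadata 36.088 ms, and so on). When reading a log like this offline, a small awk sketch can rank the slowest steps; it assumes the unwrapped one-entry-per-line console output, and console.log is a placeholder filename:

  awk '
    /428:trace_step/ { sub(/.*name: /, "");  step = $0 }
    /430:trace_step/ { sub(/.*duration: /, ""); sub(/ ms.*/, "");
                       printf "%10.3f ms  %s\n", $0, step }
  ' console.log | sort -rn | head

For this startup it would put the Restore P2L checkpoints step just below (74.068 ms) and Initialize metadata (50.248 ms) at the top; the 'FTL startup' summary further down reports 352.700 ms overall.
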
00:24:50.915 [2024-10-09 08:04:52.767007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.915 [2024-10-09 08:04:52.767090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:24:50.915 [2024-10-09 08:04:52.767112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 74.068 ms 00:24:50.915 [2024-10-09 08:04:52.767124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.915 [2024-10-09 08:04:52.780236] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:24:50.915 [2024-10-09 08:04:52.783089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.915 [2024-10-09 08:04:52.783131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:50.915 [2024-10-09 08:04:52.783152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.878 ms 00:24:50.915 [2024-10-09 08:04:52.783164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.915 [2024-10-09 08:04:52.783298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.915 [2024-10-09 08:04:52.783321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:24:50.915 [2024-10-09 08:04:52.783351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:24:50.915 [2024-10-09 08:04:52.783366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.915 [2024-10-09 08:04:52.783465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.915 [2024-10-09 08:04:52.783492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:50.915 [2024-10-09 08:04:52.783505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:24:50.915 [2024-10-09 08:04:52.783516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.915 [2024-10-09 08:04:52.783550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.915 [2024-10-09 08:04:52.783566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:50.915 [2024-10-09 08:04:52.783578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:24:50.915 [2024-10-09 08:04:52.783590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.915 [2024-10-09 08:04:52.783633] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:24:50.915 [2024-10-09 08:04:52.783654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.915 [2024-10-09 08:04:52.783694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:24:50.915 [2024-10-09 08:04:52.783714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:24:50.915 [2024-10-09 08:04:52.783725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.915 [2024-10-09 08:04:52.817115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.915 [2024-10-09 08:04:52.817401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:50.915 [2024-10-09 08:04:52.817540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.359 ms 00:24:50.915 [2024-10-09 08:04:52.817653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.915 [2024-10-09 08:04:52.817826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.915 [2024-10-09 
08:04:52.817888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:50.915 [2024-10-09 08:04:52.818030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:24:50.915 [2024-10-09 08:04:52.818093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.915 [2024-10-09 08:04:52.819556] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 352.700 ms, result 0 00:24:51.849  [2024-10-09T08:04:55.238Z] Copying: 27/1024 [MB] (27 MBps) [2024-10-09T08:04:56.172Z] Copying: 54/1024 [MB] (26 MBps) [2024-10-09T08:04:57.108Z] Copying: 81/1024 [MB] (27 MBps) [2024-10-09T08:04:58.042Z] Copying: 107/1024 [MB] (25 MBps) [2024-10-09T08:04:58.977Z] Copying: 134/1024 [MB] (27 MBps) [2024-10-09T08:04:59.910Z] Copying: 162/1024 [MB] (27 MBps) [2024-10-09T08:05:00.843Z] Copying: 188/1024 [MB] (25 MBps) [2024-10-09T08:05:02.219Z] Copying: 214/1024 [MB] (26 MBps) [2024-10-09T08:05:03.154Z] Copying: 241/1024 [MB] (27 MBps) [2024-10-09T08:05:04.088Z] Copying: 268/1024 [MB] (26 MBps) [2024-10-09T08:05:05.023Z] Copying: 294/1024 [MB] (26 MBps) [2024-10-09T08:05:05.958Z] Copying: 321/1024 [MB] (26 MBps) [2024-10-09T08:05:06.894Z] Copying: 348/1024 [MB] (27 MBps) [2024-10-09T08:05:07.873Z] Copying: 373/1024 [MB] (25 MBps) [2024-10-09T08:05:09.271Z] Copying: 400/1024 [MB] (26 MBps) [2024-10-09T08:05:09.838Z] Copying: 427/1024 [MB] (26 MBps) [2024-10-09T08:05:11.214Z] Copying: 453/1024 [MB] (26 MBps) [2024-10-09T08:05:12.149Z] Copying: 479/1024 [MB] (25 MBps) [2024-10-09T08:05:13.085Z] Copying: 505/1024 [MB] (26 MBps) [2024-10-09T08:05:14.021Z] Copying: 532/1024 [MB] (26 MBps) [2024-10-09T08:05:14.957Z] Copying: 559/1024 [MB] (26 MBps) [2024-10-09T08:05:15.892Z] Copying: 587/1024 [MB] (28 MBps) [2024-10-09T08:05:17.267Z] Copying: 614/1024 [MB] (27 MBps) [2024-10-09T08:05:17.836Z] Copying: 642/1024 [MB] (27 MBps) [2024-10-09T08:05:19.211Z] Copying: 670/1024 [MB] (28 MBps) [2024-10-09T08:05:20.145Z] Copying: 699/1024 [MB] (28 MBps) [2024-10-09T08:05:21.078Z] Copying: 727/1024 [MB] (28 MBps) [2024-10-09T08:05:22.012Z] Copying: 756/1024 [MB] (28 MBps) [2024-10-09T08:05:23.009Z] Copying: 784/1024 [MB] (28 MBps) [2024-10-09T08:05:23.946Z] Copying: 813/1024 [MB] (28 MBps) [2024-10-09T08:05:24.881Z] Copying: 841/1024 [MB] (28 MBps) [2024-10-09T08:05:26.256Z] Copying: 868/1024 [MB] (26 MBps) [2024-10-09T08:05:26.847Z] Copying: 895/1024 [MB] (26 MBps) [2024-10-09T08:05:28.220Z] Copying: 923/1024 [MB] (27 MBps) [2024-10-09T08:05:29.154Z] Copying: 950/1024 [MB] (27 MBps) [2024-10-09T08:05:30.088Z] Copying: 977/1024 [MB] (27 MBps) [2024-10-09T08:05:31.022Z] Copying: 1004/1024 [MB] (27 MBps) [2024-10-09T08:05:31.988Z] Copying: 1023/1024 [MB] (18 MBps) [2024-10-09T08:05:31.988Z] Copying: 1024/1024 [MB] (average 26 MBps)[2024-10-09 08:05:31.761692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:29.976 [2024-10-09 08:05:31.761788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:29.976 [2024-10-09 08:05:31.761817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:25:29.976 [2024-10-09 08:05:31.761834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.976 [2024-10-09 08:05:31.765404] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:29.976 [2024-10-09 08:05:31.772472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] 
Action 00:25:29.976 [2024-10-09 08:05:31.772527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:29.976 [2024-10-09 08:05:31.772546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.989 ms 00:25:29.976 [2024-10-09 08:05:31.772558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.976 [2024-10-09 08:05:31.784801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:29.976 [2024-10-09 08:05:31.784885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:29.976 [2024-10-09 08:05:31.784914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.100 ms 00:25:29.976 [2024-10-09 08:05:31.784930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.976 [2024-10-09 08:05:31.809944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:29.976 [2024-10-09 08:05:31.810051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:29.976 [2024-10-09 08:05:31.810087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.979 ms 00:25:29.976 [2024-10-09 08:05:31.810109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.976 [2024-10-09 08:05:31.817282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:29.976 [2024-10-09 08:05:31.817349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:29.976 [2024-10-09 08:05:31.817378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.114 ms 00:25:29.976 [2024-10-09 08:05:31.817391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.976 [2024-10-09 08:05:31.851809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:29.976 [2024-10-09 08:05:31.852110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:29.976 [2024-10-09 08:05:31.852142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.347 ms 00:25:29.976 [2024-10-09 08:05:31.852163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.976 [2024-10-09 08:05:31.871106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:29.976 [2024-10-09 08:05:31.871398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:29.976 [2024-10-09 08:05:31.871430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.857 ms 00:25:29.976 [2024-10-09 08:05:31.871457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.976 [2024-10-09 08:05:31.947027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:29.976 [2024-10-09 08:05:31.947323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:29.976 [2024-10-09 08:05:31.947370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 75.485 ms 00:25:29.976 [2024-10-09 08:05:31.947385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.976 [2024-10-09 08:05:31.979567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:29.976 [2024-10-09 08:05:31.979639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:25:29.976 [2024-10-09 08:05:31.979659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.147 ms 00:25:29.976 [2024-10-09 08:05:31.979671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.256 [2024-10-09 
08:05:32.011822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.256 [2024-10-09 08:05:32.011923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:25:30.256 [2024-10-09 08:05:32.011944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.074 ms 00:25:30.256 [2024-10-09 08:05:32.011956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.256 [2024-10-09 08:05:32.043091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.256 [2024-10-09 08:05:32.043181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:30.256 [2024-10-09 08:05:32.043202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.066 ms 00:25:30.256 [2024-10-09 08:05:32.043215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.256 [2024-10-09 08:05:32.074339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.256 [2024-10-09 08:05:32.074409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:30.256 [2024-10-09 08:05:32.074431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.936 ms 00:25:30.256 [2024-10-09 08:05:32.074442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.256 [2024-10-09 08:05:32.074504] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:30.256 [2024-10-09 08:05:32.074529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 129536 / 261120 wr_cnt: 1 state: open 00:25:30.256 [2024-10-09 08:05:32.074544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:25:30.256 [2024-10-09 08:05:32.074556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:30.256 [2024-10-09 08:05:32.074567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:30.256 [2024-10-09 08:05:32.074580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:30.256 [2024-10-09 08:05:32.074591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:30.256 [2024-10-09 08:05:32.074603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:30.256 [2024-10-09 08:05:32.074614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:30.256 [2024-10-09 08:05:32.074626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:30.256 [2024-10-09 08:05:32.074637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:30.256 [2024-10-09 08:05:32.074649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:30.256 [2024-10-09 08:05:32.074661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:30.256 [2024-10-09 08:05:32.074672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:30.256 [2024-10-09 08:05:32.074684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:30.256 [2024-10-09 08:05:32.074695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 
0 / 261120 wr_cnt: 0 state: free 00:25:30.256 [2024-10-09 08:05:32.074706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:30.256 [2024-10-09 08:05:32.074718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:30.256 [2024-10-09 08:05:32.074729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:30.256 [2024-10-09 08:05:32.074741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:30.256 [2024-10-09 08:05:32.074752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:30.256 [2024-10-09 08:05:32.074764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:30.256 [2024-10-09 08:05:32.074775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:30.256 [2024-10-09 08:05:32.074787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:30.256 [2024-10-09 08:05:32.074798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:30.256 [2024-10-09 08:05:32.074809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:30.256 [2024-10-09 08:05:32.074821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:30.256 [2024-10-09 08:05:32.074836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:30.256 [2024-10-09 08:05:32.074847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:30.256 [2024-10-09 08:05:32.074859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:30.256 [2024-10-09 08:05:32.074871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:30.256 [2024-10-09 08:05:32.074883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:30.256 [2024-10-09 08:05:32.074895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:30.256 [2024-10-09 08:05:32.074907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:30.256 [2024-10-09 08:05:32.074918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:30.256 [2024-10-09 08:05:32.074930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:30.256 [2024-10-09 08:05:32.074941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:30.256 [2024-10-09 08:05:32.074952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:30.256 [2024-10-09 08:05:32.074963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:30.256 [2024-10-09 08:05:32.074975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:30.256 [2024-10-09 08:05:32.074987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:30.256 [2024-10-09 08:05:32.074998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:30.256 [2024-10-09 08:05:32.075010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:30.256 [2024-10-09 08:05:32.075027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:30.256 [2024-10-09 08:05:32.075039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:30.256 [2024-10-09 08:05:32.075050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:30.256 [2024-10-09 08:05:32.075062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:30.256 [2024-10-09 08:05:32.075073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:30.256 [2024-10-09 08:05:32.075084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:30.256 [2024-10-09 08:05:32.075096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:30.256 [2024-10-09 08:05:32.075108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:30.256 [2024-10-09 08:05:32.075119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:30.256 [2024-10-09 08:05:32.075130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:30.256 [2024-10-09 08:05:32.075142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:30.256 [2024-10-09 08:05:32.075153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:30.256 [2024-10-09 08:05:32.075165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:30.256 [2024-10-09 08:05:32.075177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:30.256 [2024-10-09 08:05:32.075189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:30.256 [2024-10-09 08:05:32.075200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:30.256 [2024-10-09 08:05:32.075211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:30.256 [2024-10-09 08:05:32.075222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:30.256 [2024-10-09 08:05:32.075234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:30.256 [2024-10-09 08:05:32.075245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:30.256 [2024-10-09 08:05:32.075257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:30.256 [2024-10-09 08:05:32.075270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:30.256 [2024-10-09 08:05:32.075282] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:30.256 [2024-10-09 08:05:32.075293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:30.256 [2024-10-09 08:05:32.075305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:30.256 [2024-10-09 08:05:32.075316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:30.256 [2024-10-09 08:05:32.075327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:30.256 [2024-10-09 08:05:32.075363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:30.256 [2024-10-09 08:05:32.075377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:30.256 [2024-10-09 08:05:32.075389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:30.256 [2024-10-09 08:05:32.075400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:30.256 [2024-10-09 08:05:32.075412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:30.256 [2024-10-09 08:05:32.075423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:30.256 [2024-10-09 08:05:32.075435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:30.256 [2024-10-09 08:05:32.075447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:30.256 [2024-10-09 08:05:32.075458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:30.256 [2024-10-09 08:05:32.075469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:30.256 [2024-10-09 08:05:32.075480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:30.256 [2024-10-09 08:05:32.075493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:30.257 [2024-10-09 08:05:32.075504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:30.257 [2024-10-09 08:05:32.075515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:30.257 [2024-10-09 08:05:32.075527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:30.257 [2024-10-09 08:05:32.075541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:30.257 [2024-10-09 08:05:32.075552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:30.257 [2024-10-09 08:05:32.075564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:30.257 [2024-10-09 08:05:32.075575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:30.257 [2024-10-09 08:05:32.075587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:30.257 [2024-10-09 
08:05:32.075597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:30.257 [2024-10-09 08:05:32.075609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:30.257 [2024-10-09 08:05:32.075620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:30.257 [2024-10-09 08:05:32.075632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:30.257 [2024-10-09 08:05:32.075643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:30.257 [2024-10-09 08:05:32.075654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:30.257 [2024-10-09 08:05:32.075666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:30.257 [2024-10-09 08:05:32.075687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:30.257 [2024-10-09 08:05:32.075701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:30.257 [2024-10-09 08:05:32.075712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:30.257 [2024-10-09 08:05:32.075723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:30.257 [2024-10-09 08:05:32.075744] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:30.257 [2024-10-09 08:05:32.075756] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 79fb1636-bf1a-4768-a170-c1f22467d828 00:25:30.257 [2024-10-09 08:05:32.075768] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 129536 00:25:30.257 [2024-10-09 08:05:32.075778] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 130496 00:25:30.257 [2024-10-09 08:05:32.075789] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 129536 00:25:30.257 [2024-10-09 08:05:32.075800] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0074 00:25:30.257 [2024-10-09 08:05:32.075812] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:30.257 [2024-10-09 08:05:32.075851] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:30.257 [2024-10-09 08:05:32.075877] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:30.257 [2024-10-09 08:05:32.075887] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:30.257 [2024-10-09 08:05:32.075897] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:30.257 [2024-10-09 08:05:32.075908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.257 [2024-10-09 08:05:32.075925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:30.257 [2024-10-09 08:05:32.075937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.405 ms 00:25:30.257 [2024-10-09 08:05:32.075948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.257 [2024-10-09 08:05:32.092614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.257 [2024-10-09 08:05:32.092675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:30.257 [2024-10-09 08:05:32.092694] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.608 ms 00:25:30.257 [2024-10-09 08:05:32.092706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.257 [2024-10-09 08:05:32.093161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.257 [2024-10-09 08:05:32.093179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:30.257 [2024-10-09 08:05:32.093192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.415 ms 00:25:30.257 [2024-10-09 08:05:32.093203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.257 [2024-10-09 08:05:32.130258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:30.257 [2024-10-09 08:05:32.130343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:30.257 [2024-10-09 08:05:32.130364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:30.257 [2024-10-09 08:05:32.130382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.257 [2024-10-09 08:05:32.130464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:30.257 [2024-10-09 08:05:32.130480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:30.257 [2024-10-09 08:05:32.130492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:30.257 [2024-10-09 08:05:32.130503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.257 [2024-10-09 08:05:32.130598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:30.257 [2024-10-09 08:05:32.130617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:30.257 [2024-10-09 08:05:32.130630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:30.257 [2024-10-09 08:05:32.130642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.257 [2024-10-09 08:05:32.130671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:30.257 [2024-10-09 08:05:32.130684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:30.257 [2024-10-09 08:05:32.130695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:30.257 [2024-10-09 08:05:32.130705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.257 [2024-10-09 08:05:32.236894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:30.257 [2024-10-09 08:05:32.237172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:30.257 [2024-10-09 08:05:32.237205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:30.257 [2024-10-09 08:05:32.237218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.515 [2024-10-09 08:05:32.323649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:30.515 [2024-10-09 08:05:32.323739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:30.515 [2024-10-09 08:05:32.323762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:30.515 [2024-10-09 08:05:32.323774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.515 [2024-10-09 08:05:32.323905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:30.515 [2024-10-09 08:05:32.323924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize core IO channel 00:25:30.515 [2024-10-09 08:05:32.323936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:30.516 [2024-10-09 08:05:32.323948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.516 [2024-10-09 08:05:32.323996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:30.516 [2024-10-09 08:05:32.324017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:30.516 [2024-10-09 08:05:32.324029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:30.516 [2024-10-09 08:05:32.324039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.516 [2024-10-09 08:05:32.324180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:30.516 [2024-10-09 08:05:32.324200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:30.516 [2024-10-09 08:05:32.324213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:30.516 [2024-10-09 08:05:32.324224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.516 [2024-10-09 08:05:32.324269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:30.516 [2024-10-09 08:05:32.324286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:30.516 [2024-10-09 08:05:32.324305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:30.516 [2024-10-09 08:05:32.324316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.516 [2024-10-09 08:05:32.324393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:30.516 [2024-10-09 08:05:32.324412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:30.516 [2024-10-09 08:05:32.324436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:30.516 [2024-10-09 08:05:32.324447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.516 [2024-10-09 08:05:32.324500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:30.516 [2024-10-09 08:05:32.324523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:30.516 [2024-10-09 08:05:32.324535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:30.516 [2024-10-09 08:05:32.324546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.516 [2024-10-09 08:05:32.324687] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 565.482 ms, result 0 00:25:31.891 00:25:31.891 00:25:31.891 08:05:33 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:25:34.424 08:05:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:25:34.424 [2024-10-09 08:05:36.214028] Starting SPDK v25.01-pre git sha1 1c2942c86 / DPDK 24.03.0 initialization... 
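
At this point the section has completed one full cycle of the dirty-shutdown test: the first FTL instance was torn down ("Management process finished, name 'FTL shutdown', duration = 565.482 ms, result 0"), dirty_shutdown.sh@90 computes an md5 of testfile2 (checked after the second read-back later in this section), and @93 relaunches spdk_dd to read the first 262144 blocks of ftl0 back into testfile. The sketch below is a minimal, hypothetical rendering of that record/read-back/verify pattern; the paths are stand-ins, the spdk_dd flags match the ones logged above, and 262144 blocks at the FTL bdev's 4 KiB logical block size is 1 GiB of data.

    # Record a checksum of the data written through ftl0, restart the device,
    # then read the same LBA range back and verify it (paths are placeholders).
    md5sum testfile > testfile.md5                # taken before the shutdown
    # ... FTL shutdown + startup happens here ...
    spdk_dd --ib=ftl0 --of=testfile \
            --count=262144 --json=ftl.json        # 262144 x 4 KiB = 1 GiB read-back
    md5sum -c testfile.md5                        # non-zero exit on any mismatch

Because md5sum -c exits non-zero on a mismatch, the pipeline step fails if the restarted FTL instance returns different data than was written before the shutdown.
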
00:25:34.424 [2024-10-09 08:05:36.214221] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80354 ] 00:25:34.424 [2024-10-09 08:05:36.378661] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:34.683 [2024-10-09 08:05:36.565160] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:25:34.941 [2024-10-09 08:05:36.925624] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:34.941 [2024-10-09 08:05:36.925700] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:35.201 [2024-10-09 08:05:37.089027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.201 [2024-10-09 08:05:37.089097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:35.201 [2024-10-09 08:05:37.089118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:35.201 [2024-10-09 08:05:37.089130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.202 [2024-10-09 08:05:37.089203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.202 [2024-10-09 08:05:37.089222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:35.202 [2024-10-09 08:05:37.089235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:25:35.202 [2024-10-09 08:05:37.089246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.202 [2024-10-09 08:05:37.089277] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:35.202 [2024-10-09 08:05:37.090201] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:35.202 [2024-10-09 08:05:37.090243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.202 [2024-10-09 08:05:37.090256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:35.202 [2024-10-09 08:05:37.090269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.972 ms 00:25:35.202 [2024-10-09 08:05:37.090280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.202 [2024-10-09 08:05:37.091433] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:25:35.202 [2024-10-09 08:05:37.108170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.202 [2024-10-09 08:05:37.108249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:25:35.202 [2024-10-09 08:05:37.108283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.738 ms 00:25:35.202 [2024-10-09 08:05:37.108294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.202 [2024-10-09 08:05:37.108396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.202 [2024-10-09 08:05:37.108417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:25:35.202 [2024-10-09 08:05:37.108430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:25:35.202 [2024-10-09 08:05:37.108441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.202 [2024-10-09 08:05:37.113017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
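
The relaunch above starts a fresh "FTL startup" sequence: the base and cache bdevs are reopened, the superblock is loaded from disk ("SHM: clean 0, shm_clean 0", i.e. there is no clean shared-memory state to attach to) and validated, and the restore steps that follow replay the persisted metadata. Every step is logged by trace_step as a name/duration/status triple, which makes the console easy to profile. A hypothetical helper (assuming one NOTICE per line, as in the raw Jenkins console; console.log stands in for a saved copy of this output) that pairs each name with its duration and lists the slowest steps:

    # Pair each trace_step "name:" record with the matching "duration:" record
    # and print the slowest steps first.
    awk '/428:trace_step/ { sub(/.*name: /, "");     name = $0 }
         /430:trace_step/ { sub(/.*duration: /, ""); printf "%10.3f ms  %s\n", $1, name }' \
        console.log | sort -rn | head

Run against just this startup, the top entries would be the 75.543 ms "Restore P2L checkpoints" and 41.049 ms "Initialize NV cache" steps.
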
00:25:35.202 [2024-10-09 08:05:37.113218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:35.202 [2024-10-09 08:05:37.113248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.466 ms 00:25:35.202 [2024-10-09 08:05:37.113261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.202 [2024-10-09 08:05:37.113391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.202 [2024-10-09 08:05:37.113413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:35.202 [2024-10-09 08:05:37.113426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.090 ms 00:25:35.202 [2024-10-09 08:05:37.113437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.202 [2024-10-09 08:05:37.113505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.202 [2024-10-09 08:05:37.113523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:35.202 [2024-10-09 08:05:37.113536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:25:35.202 [2024-10-09 08:05:37.113547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.202 [2024-10-09 08:05:37.113581] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:35.202 [2024-10-09 08:05:37.117831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.202 [2024-10-09 08:05:37.117866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:35.202 [2024-10-09 08:05:37.117914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.260 ms 00:25:35.202 [2024-10-09 08:05:37.117925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.202 [2024-10-09 08:05:37.117963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.202 [2024-10-09 08:05:37.117977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:35.202 [2024-10-09 08:05:37.117989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:25:35.202 [2024-10-09 08:05:37.118001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.202 [2024-10-09 08:05:37.118053] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:25:35.202 [2024-10-09 08:05:37.118083] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:25:35.202 [2024-10-09 08:05:37.118141] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:25:35.202 [2024-10-09 08:05:37.118162] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:25:35.202 [2024-10-09 08:05:37.118274] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:35.202 [2024-10-09 08:05:37.118289] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:35.202 [2024-10-09 08:05:37.118304] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:25:35.202 [2024-10-09 08:05:37.118324] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:35.202 [2024-10-09 08:05:37.118370] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:35.202 [2024-10-09 08:05:37.118384] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:25:35.202 [2024-10-09 08:05:37.118396] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:35.202 [2024-10-09 08:05:37.118406] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:35.202 [2024-10-09 08:05:37.118417] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:35.202 [2024-10-09 08:05:37.118429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.202 [2024-10-09 08:05:37.118440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:35.202 [2024-10-09 08:05:37.118452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.379 ms 00:25:35.202 [2024-10-09 08:05:37.118463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.202 [2024-10-09 08:05:37.118566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.202 [2024-10-09 08:05:37.118586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:35.202 [2024-10-09 08:05:37.118598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:25:35.202 [2024-10-09 08:05:37.118609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.202 [2024-10-09 08:05:37.118755] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:35.202 [2024-10-09 08:05:37.118783] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:35.202 [2024-10-09 08:05:37.118796] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:35.202 [2024-10-09 08:05:37.118808] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:35.202 [2024-10-09 08:05:37.118819] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:35.202 [2024-10-09 08:05:37.118829] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:35.202 [2024-10-09 08:05:37.118840] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:25:35.202 [2024-10-09 08:05:37.118851] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:35.202 [2024-10-09 08:05:37.118861] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:25:35.202 [2024-10-09 08:05:37.118872] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:35.202 [2024-10-09 08:05:37.118882] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:35.202 [2024-10-09 08:05:37.118892] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:25:35.202 [2024-10-09 08:05:37.118902] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:35.202 [2024-10-09 08:05:37.118927] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:35.202 [2024-10-09 08:05:37.118938] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:25:35.202 [2024-10-09 08:05:37.118948] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:35.202 [2024-10-09 08:05:37.118964] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:35.202 [2024-10-09 08:05:37.118975] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:25:35.202 [2024-10-09 08:05:37.118985] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:35.202 [2024-10-09 08:05:37.118996] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:35.202 [2024-10-09 08:05:37.119006] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:25:35.202 [2024-10-09 08:05:37.119016] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:35.202 [2024-10-09 08:05:37.119026] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:35.202 [2024-10-09 08:05:37.119037] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:25:35.202 [2024-10-09 08:05:37.119047] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:35.202 [2024-10-09 08:05:37.119057] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:35.202 [2024-10-09 08:05:37.119067] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:25:35.202 [2024-10-09 08:05:37.119078] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:35.202 [2024-10-09 08:05:37.119088] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:35.202 [2024-10-09 08:05:37.119098] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:25:35.202 [2024-10-09 08:05:37.119108] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:35.202 [2024-10-09 08:05:37.119119] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:35.202 [2024-10-09 08:05:37.119129] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:25:35.202 [2024-10-09 08:05:37.119139] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:35.202 [2024-10-09 08:05:37.119149] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:35.202 [2024-10-09 08:05:37.119159] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:25:35.202 [2024-10-09 08:05:37.119169] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:35.202 [2024-10-09 08:05:37.119181] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:35.202 [2024-10-09 08:05:37.119191] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:25:35.202 [2024-10-09 08:05:37.119201] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:35.202 [2024-10-09 08:05:37.119212] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:35.202 [2024-10-09 08:05:37.119222] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:25:35.202 [2024-10-09 08:05:37.119232] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:35.202 [2024-10-09 08:05:37.119241] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:35.202 [2024-10-09 08:05:37.119253] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:35.202 [2024-10-09 08:05:37.119268] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:35.202 [2024-10-09 08:05:37.119280] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:35.202 [2024-10-09 08:05:37.119291] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:35.202 [2024-10-09 08:05:37.119304] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:35.203 [2024-10-09 08:05:37.119315] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:35.203 
[2024-10-09 08:05:37.119325] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:35.203 [2024-10-09 08:05:37.119353] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:35.203 [2024-10-09 08:05:37.119365] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:35.203 [2024-10-09 08:05:37.119377] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:35.203 [2024-10-09 08:05:37.119390] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:35.203 [2024-10-09 08:05:37.119403] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:25:35.203 [2024-10-09 08:05:37.119414] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:25:35.203 [2024-10-09 08:05:37.119426] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:25:35.203 [2024-10-09 08:05:37.119437] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:25:35.203 [2024-10-09 08:05:37.119448] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:25:35.203 [2024-10-09 08:05:37.119473] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:25:35.203 [2024-10-09 08:05:37.119484] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:25:35.203 [2024-10-09 08:05:37.119511] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:25:35.203 [2024-10-09 08:05:37.119522] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:25:35.203 [2024-10-09 08:05:37.119532] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:25:35.203 [2024-10-09 08:05:37.119542] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:25:35.203 [2024-10-09 08:05:37.119552] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:25:35.203 [2024-10-09 08:05:37.119564] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:25:35.203 [2024-10-09 08:05:37.119575] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:25:35.203 [2024-10-09 08:05:37.119586] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:35.203 [2024-10-09 08:05:37.119597] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:35.203 [2024-10-09 08:05:37.119608] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:25:35.203 [2024-10-09 08:05:37.119619] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:35.203 [2024-10-09 08:05:37.119630] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:35.203 [2024-10-09 08:05:37.119640] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:35.203 [2024-10-09 08:05:37.119652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.203 [2024-10-09 08:05:37.119662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:35.203 [2024-10-09 08:05:37.119673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.968 ms 00:25:35.203 [2024-10-09 08:05:37.119710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.203 [2024-10-09 08:05:37.158483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.203 [2024-10-09 08:05:37.158800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:35.203 [2024-10-09 08:05:37.158936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.704 ms 00:25:35.203 [2024-10-09 08:05:37.158988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.203 [2024-10-09 08:05:37.159264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.203 [2024-10-09 08:05:37.159315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:35.203 [2024-10-09 08:05:37.159545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:25:35.203 [2024-10-09 08:05:37.159600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.203 [2024-10-09 08:05:37.200798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.203 [2024-10-09 08:05:37.201070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:35.203 [2024-10-09 08:05:37.201206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.049 ms 00:25:35.203 [2024-10-09 08:05:37.201258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.203 [2024-10-09 08:05:37.201453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.203 [2024-10-09 08:05:37.201510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:35.203 [2024-10-09 08:05:37.201551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:35.203 [2024-10-09 08:05:37.201665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.203 [2024-10-09 08:05:37.202099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.203 [2024-10-09 08:05:37.202244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:35.203 [2024-10-09 08:05:37.202270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.293 ms 00:25:35.203 [2024-10-09 08:05:37.202293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.203 [2024-10-09 08:05:37.202476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.203 [2024-10-09 08:05:37.202497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:35.203 [2024-10-09 08:05:37.202509] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.146 ms 00:25:35.203 [2024-10-09 08:05:37.202521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.462 [2024-10-09 08:05:37.219167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.462 [2024-10-09 08:05:37.219234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:35.462 [2024-10-09 08:05:37.219251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.616 ms 00:25:35.462 [2024-10-09 08:05:37.219264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.462 [2024-10-09 08:05:37.236387] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:25:35.462 [2024-10-09 08:05:37.236439] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:25:35.462 [2024-10-09 08:05:37.236474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.462 [2024-10-09 08:05:37.236486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:25:35.462 [2024-10-09 08:05:37.236499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.008 ms 00:25:35.462 [2024-10-09 08:05:37.236509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.462 [2024-10-09 08:05:37.267458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.462 [2024-10-09 08:05:37.267506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:25:35.462 [2024-10-09 08:05:37.267525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.886 ms 00:25:35.462 [2024-10-09 08:05:37.267537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.462 [2024-10-09 08:05:37.284569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.462 [2024-10-09 08:05:37.284612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:25:35.462 [2024-10-09 08:05:37.284629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.963 ms 00:25:35.462 [2024-10-09 08:05:37.284641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.463 [2024-10-09 08:05:37.300923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.463 [2024-10-09 08:05:37.300965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:25:35.463 [2024-10-09 08:05:37.300997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.237 ms 00:25:35.463 [2024-10-09 08:05:37.301008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.463 [2024-10-09 08:05:37.301845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.463 [2024-10-09 08:05:37.301881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:35.463 [2024-10-09 08:05:37.301897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.722 ms 00:25:35.463 [2024-10-09 08:05:37.301908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.463 [2024-10-09 08:05:37.377478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.463 [2024-10-09 08:05:37.377587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:25:35.463 [2024-10-09 08:05:37.377624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 75.543 ms 00:25:35.463 [2024-10-09 08:05:37.377636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.463 [2024-10-09 08:05:37.391055] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:25:35.463 [2024-10-09 08:05:37.393824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.463 [2024-10-09 08:05:37.393860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:35.463 [2024-10-09 08:05:37.393894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.107 ms 00:25:35.463 [2024-10-09 08:05:37.393912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.463 [2024-10-09 08:05:37.394038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.463 [2024-10-09 08:05:37.394058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:25:35.463 [2024-10-09 08:05:37.394072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:25:35.463 [2024-10-09 08:05:37.394084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.463 [2024-10-09 08:05:37.395745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.463 [2024-10-09 08:05:37.395782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:35.463 [2024-10-09 08:05:37.395798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.606 ms 00:25:35.463 [2024-10-09 08:05:37.395809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.463 [2024-10-09 08:05:37.395853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.463 [2024-10-09 08:05:37.395868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:35.463 [2024-10-09 08:05:37.395880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:25:35.463 [2024-10-09 08:05:37.395891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.463 [2024-10-09 08:05:37.395931] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:25:35.463 [2024-10-09 08:05:37.395948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.463 [2024-10-09 08:05:37.395958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:25:35.463 [2024-10-09 08:05:37.395970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:25:35.463 [2024-10-09 08:05:37.395986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.463 [2024-10-09 08:05:37.430115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.463 [2024-10-09 08:05:37.430499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:35.463 [2024-10-09 08:05:37.430532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.101 ms 00:25:35.463 [2024-10-09 08:05:37.430548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.463 [2024-10-09 08:05:37.430670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.463 [2024-10-09 08:05:37.430690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:35.463 [2024-10-09 08:05:37.430703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:25:35.463 [2024-10-09 08:05:37.430714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
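
Startup recovery is now complete: the NV cache state came back with 4 full and 0 empty chunks, the valid map, band info, and trim metadata were restored, the 2048-page P2L checkpoints were replayed, and the L2P was initialized and restored (capped at 9 of 10 MiB resident). Note that "Set FTL dirty state" runs before user I/O begins, so another unclean stop would again be recoverable. The finish message just below puts the whole startup at 342.374 ms; a quick check with bc shows the P2L checkpoint restore alone accounts for roughly a fifth of it:

    # Share of the 342.374 ms "FTL startup" spent in "Restore P2L checkpoints" (75.543 ms)
    echo 'scale=3; 75.543 / 342.374' | bc    # -> .220, i.e. about 22% of startup time
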
00:25:35.463 [2024-10-09 08:05:37.431957] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 342.374 ms, result 0 00:25:36.843  [2024-10-09T08:05:39.790Z] Copying: 1584/1048576 [kB] (1584 kBps) [2024-10-09T08:05:40.726Z] Copying: 4936/1048576 [kB] (3352 kBps) [2024-10-09T08:05:41.662Z] Copying: 23/1024 [MB] (19 MBps) [2024-10-09T08:05:43.069Z] Copying: 54/1024 [MB] (30 MBps) [2024-10-09T08:05:44.004Z] Copying: 84/1024 [MB] (30 MBps) [2024-10-09T08:05:44.939Z] Copying: 114/1024 [MB] (30 MBps) [2024-10-09T08:05:45.874Z] Copying: 144/1024 [MB] (30 MBps) [2024-10-09T08:05:46.810Z] Copying: 175/1024 [MB] (30 MBps) [2024-10-09T08:05:47.761Z] Copying: 205/1024 [MB] (30 MBps) [2024-10-09T08:05:48.695Z] Copying: 236/1024 [MB] (30 MBps) [2024-10-09T08:05:50.070Z] Copying: 266/1024 [MB] (30 MBps) [2024-10-09T08:05:51.059Z] Copying: 296/1024 [MB] (30 MBps) [2024-10-09T08:05:51.994Z] Copying: 327/1024 [MB] (30 MBps) [2024-10-09T08:05:52.929Z] Copying: 356/1024 [MB] (29 MBps) [2024-10-09T08:05:53.865Z] Copying: 387/1024 [MB] (30 MBps) [2024-10-09T08:05:54.799Z] Copying: 418/1024 [MB] (31 MBps) [2024-10-09T08:05:55.734Z] Copying: 447/1024 [MB] (29 MBps) [2024-10-09T08:05:56.670Z] Copying: 474/1024 [MB] (26 MBps) [2024-10-09T08:05:58.046Z] Copying: 505/1024 [MB] (31 MBps) [2024-10-09T08:05:58.982Z] Copying: 536/1024 [MB] (30 MBps) [2024-10-09T08:05:59.918Z] Copying: 566/1024 [MB] (30 MBps) [2024-10-09T08:06:00.852Z] Copying: 597/1024 [MB] (30 MBps) [2024-10-09T08:06:01.787Z] Copying: 625/1024 [MB] (28 MBps) [2024-10-09T08:06:02.753Z] Copying: 655/1024 [MB] (29 MBps) [2024-10-09T08:06:03.688Z] Copying: 686/1024 [MB] (30 MBps) [2024-10-09T08:06:05.064Z] Copying: 715/1024 [MB] (29 MBps) [2024-10-09T08:06:05.998Z] Copying: 745/1024 [MB] (29 MBps) [2024-10-09T08:06:06.933Z] Copying: 774/1024 [MB] (29 MBps) [2024-10-09T08:06:07.871Z] Copying: 804/1024 [MB] (30 MBps) [2024-10-09T08:06:08.806Z] Copying: 835/1024 [MB] (30 MBps) [2024-10-09T08:06:09.741Z] Copying: 863/1024 [MB] (28 MBps) [2024-10-09T08:06:10.676Z] Copying: 894/1024 [MB] (30 MBps) [2024-10-09T08:06:12.050Z] Copying: 925/1024 [MB] (30 MBps) [2024-10-09T08:06:13.018Z] Copying: 956/1024 [MB] (31 MBps) [2024-10-09T08:06:13.950Z] Copying: 987/1024 [MB] (30 MBps) [2024-10-09T08:06:13.950Z] Copying: 1018/1024 [MB] (31 MBps) [2024-10-09T08:06:14.207Z] Copying: 1024/1024 [MB] (average 28 MBps)[2024-10-09 08:06:14.146552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:12.195 [2024-10-09 08:06:14.146925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:26:12.195 [2024-10-09 08:06:14.146972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:26:12.195 [2024-10-09 08:06:14.146993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:12.195 [2024-10-09 08:06:14.147050] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:26:12.195 [2024-10-09 08:06:14.151646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:12.195 [2024-10-09 08:06:14.151716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:26:12.195 [2024-10-09 08:06:14.151737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.561 ms 00:26:12.195 [2024-10-09 08:06:14.151751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:12.195 [2024-10-09 08:06:14.152049] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Action 00:26:12.195 [2024-10-09 08:06:14.152076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:26:12.195 [2024-10-09 08:06:14.152092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.262 ms 00:26:12.195 [2024-10-09 08:06:14.152105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:12.195 [2024-10-09 08:06:14.164083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:12.195 [2024-10-09 08:06:14.164183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:26:12.195 [2024-10-09 08:06:14.164207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.952 ms 00:26:12.195 [2024-10-09 08:06:14.164233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:12.195 [2024-10-09 08:06:14.173596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:12.195 [2024-10-09 08:06:14.173647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:26:12.195 [2024-10-09 08:06:14.173667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.312 ms 00:26:12.195 [2024-10-09 08:06:14.173681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:12.455 [2024-10-09 08:06:14.211758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:12.455 [2024-10-09 08:06:14.211815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:26:12.455 [2024-10-09 08:06:14.211837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.974 ms 00:26:12.455 [2024-10-09 08:06:14.211851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:12.455 [2024-10-09 08:06:14.229667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:12.455 [2024-10-09 08:06:14.229724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:26:12.455 [2024-10-09 08:06:14.229743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.760 ms 00:26:12.455 [2024-10-09 08:06:14.229755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:12.455 [2024-10-09 08:06:14.231293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:12.455 [2024-10-09 08:06:14.231355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:26:12.455 [2024-10-09 08:06:14.231374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.495 ms 00:26:12.455 [2024-10-09 08:06:14.231386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:12.455 [2024-10-09 08:06:14.263165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:12.455 [2024-10-09 08:06:14.263230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:26:12.455 [2024-10-09 08:06:14.263250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.754 ms 00:26:12.455 [2024-10-09 08:06:14.263261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:12.455 [2024-10-09 08:06:14.294553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:12.455 [2024-10-09 08:06:14.294611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:26:12.455 [2024-10-09 08:06:14.294629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.237 ms 00:26:12.455 [2024-10-09 08:06:14.294641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:12.455 
[2024-10-09 08:06:14.325653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:12.455 [2024-10-09 08:06:14.325710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:26:12.455 [2024-10-09 08:06:14.325729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.953 ms 00:26:12.455 [2024-10-09 08:06:14.325740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:12.455 [2024-10-09 08:06:14.356693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:12.455 [2024-10-09 08:06:14.356932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:26:12.455 [2024-10-09 08:06:14.356963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.850 ms 00:26:12.455 [2024-10-09 08:06:14.356975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:12.455 [2024-10-09 08:06:14.357030] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:26:12.455 [2024-10-09 08:06:14.357055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:26:12.455 [2024-10-09 08:06:14.357080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:26:12.455 [2024-10-09 08:06:14.357093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:26:12.455 [2024-10-09 08:06:14.357105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:26:12.455 [2024-10-09 08:06:14.357116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:26:12.455 [2024-10-09 08:06:14.357128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:26:12.455 [2024-10-09 08:06:14.357139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:26:12.455 [2024-10-09 08:06:14.357151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:26:12.455 [2024-10-09 08:06:14.357163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:26:12.455 [2024-10-09 08:06:14.357174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:26:12.455 [2024-10-09 08:06:14.357186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:26:12.455 [2024-10-09 08:06:14.357197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:26:12.455 [2024-10-09 08:06:14.357209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:26:12.455 [2024-10-09 08:06:14.357222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:26:12.455 [2024-10-09 08:06:14.357233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:26:12.455 [2024-10-09 08:06:14.357245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:26:12.455 [2024-10-09 08:06:14.357256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:26:12.455 [2024-10-09 08:06:14.357268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 
261120 wr_cnt: 0 state: free 00:26:12.455 [2024-10-09 08:06:14.357280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:26:12.455 [2024-10-09 08:06:14.357292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:26:12.455 [2024-10-09 08:06:14.357303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:26:12.455 [2024-10-09 08:06:14.357314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:26:12.455 [2024-10-09 08:06:14.357326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:26:12.455 [2024-10-09 08:06:14.357361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:26:12.455 [2024-10-09 08:06:14.357375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:26:12.455 [2024-10-09 08:06:14.357387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:26:12.455 [2024-10-09 08:06:14.357398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:26:12.455 [2024-10-09 08:06:14.357412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:26:12.455 [2024-10-09 08:06:14.357424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:26:12.455 [2024-10-09 08:06:14.357436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:26:12.455 [2024-10-09 08:06:14.357448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:26:12.455 [2024-10-09 08:06:14.357460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:26:12.455 [2024-10-09 08:06:14.357478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:26:12.455 [2024-10-09 08:06:14.357490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:26:12.455 [2024-10-09 08:06:14.357508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:26:12.455 [2024-10-09 08:06:14.357520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:26:12.455 [2024-10-09 08:06:14.357531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:26:12.455 [2024-10-09 08:06:14.357543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:26:12.455 [2024-10-09 08:06:14.357555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:26:12.455 [2024-10-09 08:06:14.357566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:26:12.455 [2024-10-09 08:06:14.357578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:26:12.455 [2024-10-09 08:06:14.357589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:26:12.455 [2024-10-09 08:06:14.357601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:26:12.455 [2024-10-09 08:06:14.357613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:26:12.455 [2024-10-09 08:06:14.357624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:26:12.455 [2024-10-09 08:06:14.357636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:26:12.455 [2024-10-09 08:06:14.357648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:26:12.455 [2024-10-09 08:06:14.357659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:26:12.455 [2024-10-09 08:06:14.357671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:26:12.455 [2024-10-09 08:06:14.357683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:26:12.455 [2024-10-09 08:06:14.357695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:26:12.455 [2024-10-09 08:06:14.357706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:26:12.455 [2024-10-09 08:06:14.357717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:26:12.455 [2024-10-09 08:06:14.357729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:26:12.455 [2024-10-09 08:06:14.357741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:26:12.455 [2024-10-09 08:06:14.357752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:26:12.456 [2024-10-09 08:06:14.357765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:26:12.456 [2024-10-09 08:06:14.357777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:26:12.456 [2024-10-09 08:06:14.357789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:26:12.456 [2024-10-09 08:06:14.357800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:26:12.456 [2024-10-09 08:06:14.357812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:26:12.456 [2024-10-09 08:06:14.357823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:26:12.456 [2024-10-09 08:06:14.357834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:26:12.456 [2024-10-09 08:06:14.357846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:26:12.456 [2024-10-09 08:06:14.357862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:26:12.456 [2024-10-09 08:06:14.357874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:26:12.456 [2024-10-09 08:06:14.357885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:26:12.456 [2024-10-09 08:06:14.357897] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:26:12.456 [2024-10-09 08:06:14.357908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:26:12.456 [2024-10-09 08:06:14.357920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:26:12.456 [2024-10-09 08:06:14.357932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:26:12.456 [2024-10-09 08:06:14.357944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:26:12.456 [2024-10-09 08:06:14.357955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:26:12.456 [2024-10-09 08:06:14.357966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:26:12.456 [2024-10-09 08:06:14.357978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:26:12.456 [2024-10-09 08:06:14.357989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:26:12.456 [2024-10-09 08:06:14.358001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:26:12.456 [2024-10-09 08:06:14.358012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:26:12.456 [2024-10-09 08:06:14.358023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:26:12.456 [2024-10-09 08:06:14.358035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:26:12.456 [2024-10-09 08:06:14.358047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:26:12.456 [2024-10-09 08:06:14.358059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:26:12.456 [2024-10-09 08:06:14.358070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:26:12.456 [2024-10-09 08:06:14.358082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:26:12.456 [2024-10-09 08:06:14.358093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:26:12.456 [2024-10-09 08:06:14.358106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:26:12.456 [2024-10-09 08:06:14.358118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:26:12.456 [2024-10-09 08:06:14.358130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:26:12.456 [2024-10-09 08:06:14.358141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:26:12.456 [2024-10-09 08:06:14.358152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:26:12.456 [2024-10-09 08:06:14.358164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:26:12.456 [2024-10-09 08:06:14.358176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:26:12.456 [2024-10-09 
08:06:14.358187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:26:12.456 [2024-10-09 08:06:14.358199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:26:12.456 [2024-10-09 08:06:14.358211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:26:12.456 [2024-10-09 08:06:14.358223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:26:12.456 [2024-10-09 08:06:14.358236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:26:12.456 [2024-10-09 08:06:14.358249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:26:12.456 [2024-10-09 08:06:14.358260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:26:12.456 [2024-10-09 08:06:14.358272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:26:12.456 [2024-10-09 08:06:14.358293] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:26:12.456 [2024-10-09 08:06:14.358304] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 79fb1636-bf1a-4768-a170-c1f22467d828 00:26:12.456 [2024-10-09 08:06:14.358316] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:26:12.456 [2024-10-09 08:06:14.358327] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 135104 00:26:12.456 [2024-10-09 08:06:14.358352] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 133120 00:26:12.456 [2024-10-09 08:06:14.358365] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0149 00:26:12.456 [2024-10-09 08:06:14.358375] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:26:12.456 [2024-10-09 08:06:14.358387] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:26:12.456 [2024-10-09 08:06:14.358397] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:26:12.456 [2024-10-09 08:06:14.358407] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:26:12.456 [2024-10-09 08:06:14.358417] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:26:12.456 [2024-10-09 08:06:14.358428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:12.456 [2024-10-09 08:06:14.358439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:26:12.456 [2024-10-09 08:06:14.358466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.400 ms 00:26:12.456 [2024-10-09 08:06:14.358482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:12.456 [2024-10-09 08:06:14.375195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:12.456 [2024-10-09 08:06:14.375244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:26:12.456 [2024-10-09 08:06:14.375262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.643 ms 00:26:12.456 [2024-10-09 08:06:14.375273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:12.456 [2024-10-09 08:06:14.375746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:12.456 [2024-10-09 08:06:14.375834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:26:12.456 
[2024-10-09 08:06:14.375856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.442 ms 00:26:12.456 [2024-10-09 08:06:14.375868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:12.456 [2024-10-09 08:06:14.413137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:12.456 [2024-10-09 08:06:14.413380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:12.456 [2024-10-09 08:06:14.413504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:12.456 [2024-10-09 08:06:14.413556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:12.456 [2024-10-09 08:06:14.413665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:12.456 [2024-10-09 08:06:14.413830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:12.456 [2024-10-09 08:06:14.413855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:12.456 [2024-10-09 08:06:14.413867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:12.456 [2024-10-09 08:06:14.413967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:12.456 [2024-10-09 08:06:14.413987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:12.456 [2024-10-09 08:06:14.414000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:12.456 [2024-10-09 08:06:14.414011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:12.456 [2024-10-09 08:06:14.414034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:12.456 [2024-10-09 08:06:14.414047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:12.456 [2024-10-09 08:06:14.414067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:12.456 [2024-10-09 08:06:14.414078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:12.714 [2024-10-09 08:06:14.517932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:12.714 [2024-10-09 08:06:14.518004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:12.714 [2024-10-09 08:06:14.518023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:12.714 [2024-10-09 08:06:14.518034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:12.714 [2024-10-09 08:06:14.602624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:12.714 [2024-10-09 08:06:14.602908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:12.714 [2024-10-09 08:06:14.602940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:12.714 [2024-10-09 08:06:14.602953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:12.714 [2024-10-09 08:06:14.603064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:12.714 [2024-10-09 08:06:14.603082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:12.714 [2024-10-09 08:06:14.603095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:12.714 [2024-10-09 08:06:14.603106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:12.714 [2024-10-09 08:06:14.603152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:12.714 [2024-10-09 08:06:14.603167] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:12.714 [2024-10-09 08:06:14.603179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:12.714 [2024-10-09 08:06:14.603195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:12.714 [2024-10-09 08:06:14.603327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:12.714 [2024-10-09 08:06:14.603379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:12.714 [2024-10-09 08:06:14.603392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:12.714 [2024-10-09 08:06:14.603404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:12.714 [2024-10-09 08:06:14.603455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:12.714 [2024-10-09 08:06:14.603474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:26:12.714 [2024-10-09 08:06:14.603487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:12.714 [2024-10-09 08:06:14.603497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:12.714 [2024-10-09 08:06:14.603550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:12.714 [2024-10-09 08:06:14.603565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:12.714 [2024-10-09 08:06:14.603576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:12.714 [2024-10-09 08:06:14.603587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:12.714 [2024-10-09 08:06:14.603638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:12.714 [2024-10-09 08:06:14.603655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:12.714 [2024-10-09 08:06:14.603666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:12.714 [2024-10-09 08:06:14.603695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:12.714 [2024-10-09 08:06:14.603838] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 457.266 ms, result 0 00:26:14.093 00:26:14.093 00:26:14.093 08:06:15 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:26:15.993 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:26:15.993 08:06:17 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:26:16.251 [2024-10-09 08:06:18.023175] Starting SPDK v25.01-pre git sha1 1c2942c86 / DPDK 24.03.0 initialization... 
00:26:16.251 [2024-10-09 08:06:18.023349] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80767 ] 00:26:16.251 [2024-10-09 08:06:18.186931] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:16.510 [2024-10-09 08:06:18.379010] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:26:16.769 [2024-10-09 08:06:18.705790] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:16.770 [2024-10-09 08:06:18.705869] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:17.030 [2024-10-09 08:06:18.866799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.030 [2024-10-09 08:06:18.866863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:26:17.030 [2024-10-09 08:06:18.866885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:26:17.030 [2024-10-09 08:06:18.866897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.030 [2024-10-09 08:06:18.866970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.030 [2024-10-09 08:06:18.866990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:17.030 [2024-10-09 08:06:18.867003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:26:17.030 [2024-10-09 08:06:18.867015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.030 [2024-10-09 08:06:18.867047] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:26:17.030 [2024-10-09 08:06:18.867994] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:26:17.030 [2024-10-09 08:06:18.868038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.030 [2024-10-09 08:06:18.868053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:17.030 [2024-10-09 08:06:18.868066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.998 ms 00:26:17.030 [2024-10-09 08:06:18.868078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.030 [2024-10-09 08:06:18.869200] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:26:17.030 [2024-10-09 08:06:18.885294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.030 [2024-10-09 08:06:18.885353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:26:17.030 [2024-10-09 08:06:18.885373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.095 ms 00:26:17.030 [2024-10-09 08:06:18.885385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.030 [2024-10-09 08:06:18.885457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.030 [2024-10-09 08:06:18.885476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:26:17.030 [2024-10-09 08:06:18.885489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:26:17.030 [2024-10-09 08:06:18.885500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.030 [2024-10-09 08:06:18.889775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:26:17.030 [2024-10-09 08:06:18.889822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:17.030 [2024-10-09 08:06:18.889839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.179 ms 00:26:17.030 [2024-10-09 08:06:18.889851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.030 [2024-10-09 08:06:18.889949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.030 [2024-10-09 08:06:18.889967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:17.030 [2024-10-09 08:06:18.889980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:26:17.030 [2024-10-09 08:06:18.889991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.030 [2024-10-09 08:06:18.890057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.030 [2024-10-09 08:06:18.890075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:26:17.030 [2024-10-09 08:06:18.890088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:26:17.030 [2024-10-09 08:06:18.890099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.030 [2024-10-09 08:06:18.890133] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:26:17.030 [2024-10-09 08:06:18.894373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.030 [2024-10-09 08:06:18.894411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:17.030 [2024-10-09 08:06:18.894427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.249 ms 00:26:17.030 [2024-10-09 08:06:18.894438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.030 [2024-10-09 08:06:18.894486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.030 [2024-10-09 08:06:18.894501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:26:17.030 [2024-10-09 08:06:18.894514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:26:17.030 [2024-10-09 08:06:18.894525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.030 [2024-10-09 08:06:18.894577] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:26:17.030 [2024-10-09 08:06:18.894607] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:26:17.030 [2024-10-09 08:06:18.894651] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:26:17.030 [2024-10-09 08:06:18.894671] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:26:17.030 [2024-10-09 08:06:18.894782] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:26:17.030 [2024-10-09 08:06:18.894797] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:26:17.030 [2024-10-09 08:06:18.894812] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:26:17.030 [2024-10-09 08:06:18.894831] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:26:17.030 [2024-10-09 08:06:18.894845] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:26:17.030 [2024-10-09 08:06:18.894857] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:26:17.030 [2024-10-09 08:06:18.894868] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:26:17.030 [2024-10-09 08:06:18.894879] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:26:17.030 [2024-10-09 08:06:18.894890] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:26:17.030 [2024-10-09 08:06:18.894901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.030 [2024-10-09 08:06:18.894913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:26:17.030 [2024-10-09 08:06:18.894925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.328 ms 00:26:17.030 [2024-10-09 08:06:18.894936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.030 [2024-10-09 08:06:18.895036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.030 [2024-10-09 08:06:18.895056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:26:17.030 [2024-10-09 08:06:18.895069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:26:17.030 [2024-10-09 08:06:18.895079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.030 [2024-10-09 08:06:18.895199] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:26:17.030 [2024-10-09 08:06:18.895218] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:26:17.030 [2024-10-09 08:06:18.895231] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:17.030 [2024-10-09 08:06:18.895243] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:17.030 [2024-10-09 08:06:18.895254] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:26:17.030 [2024-10-09 08:06:18.895265] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:26:17.030 [2024-10-09 08:06:18.895276] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:26:17.030 [2024-10-09 08:06:18.895288] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:26:17.030 [2024-10-09 08:06:18.895299] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:26:17.030 [2024-10-09 08:06:18.895310] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:17.030 [2024-10-09 08:06:18.895320] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:26:17.030 [2024-10-09 08:06:18.895353] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:26:17.030 [2024-10-09 08:06:18.895368] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:17.030 [2024-10-09 08:06:18.895392] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:26:17.030 [2024-10-09 08:06:18.895404] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:26:17.030 [2024-10-09 08:06:18.895416] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:17.030 [2024-10-09 08:06:18.895426] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:26:17.031 [2024-10-09 08:06:18.895437] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:26:17.031 [2024-10-09 08:06:18.895448] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:17.031 [2024-10-09 08:06:18.895458] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:26:17.031 [2024-10-09 08:06:18.895469] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:26:17.031 [2024-10-09 08:06:18.895479] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:17.031 [2024-10-09 08:06:18.895490] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:26:17.031 [2024-10-09 08:06:18.895500] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:26:17.031 [2024-10-09 08:06:18.895511] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:17.031 [2024-10-09 08:06:18.895522] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:26:17.031 [2024-10-09 08:06:18.895532] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:26:17.031 [2024-10-09 08:06:18.895542] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:17.031 [2024-10-09 08:06:18.895561] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:26:17.031 [2024-10-09 08:06:18.895573] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:26:17.031 [2024-10-09 08:06:18.895583] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:17.031 [2024-10-09 08:06:18.895593] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:26:17.031 [2024-10-09 08:06:18.895604] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:26:17.031 [2024-10-09 08:06:18.895614] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:17.031 [2024-10-09 08:06:18.895624] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:26:17.031 [2024-10-09 08:06:18.895635] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:26:17.031 [2024-10-09 08:06:18.895645] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:17.031 [2024-10-09 08:06:18.895656] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:26:17.031 [2024-10-09 08:06:18.895667] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:26:17.031 [2024-10-09 08:06:18.895677] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:17.031 [2024-10-09 08:06:18.895698] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:26:17.031 [2024-10-09 08:06:18.895708] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:26:17.031 [2024-10-09 08:06:18.895719] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:17.031 [2024-10-09 08:06:18.895729] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:26:17.031 [2024-10-09 08:06:18.895740] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:26:17.031 [2024-10-09 08:06:18.895757] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:17.031 [2024-10-09 08:06:18.895769] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:17.031 [2024-10-09 08:06:18.895781] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:26:17.031 [2024-10-09 08:06:18.895792] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:26:17.031 [2024-10-09 08:06:18.895803] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:26:17.031 
[2024-10-09 08:06:18.895814] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:26:17.031 [2024-10-09 08:06:18.895824] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:26:17.031 [2024-10-09 08:06:18.895834] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:26:17.031 [2024-10-09 08:06:18.895847] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:26:17.031 [2024-10-09 08:06:18.895861] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:17.031 [2024-10-09 08:06:18.895873] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:26:17.031 [2024-10-09 08:06:18.895885] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:26:17.031 [2024-10-09 08:06:18.895896] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:26:17.031 [2024-10-09 08:06:18.895908] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:26:17.031 [2024-10-09 08:06:18.895919] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:26:17.031 [2024-10-09 08:06:18.895931] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:26:17.031 [2024-10-09 08:06:18.895942] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:26:17.031 [2024-10-09 08:06:18.895953] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:26:17.031 [2024-10-09 08:06:18.895964] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:26:17.031 [2024-10-09 08:06:18.895976] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:26:17.031 [2024-10-09 08:06:18.895987] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:26:17.031 [2024-10-09 08:06:18.895998] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:26:17.031 [2024-10-09 08:06:18.896010] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:26:17.031 [2024-10-09 08:06:18.896021] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:26:17.031 [2024-10-09 08:06:18.896032] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:26:17.031 [2024-10-09 08:06:18.896045] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:17.031 [2024-10-09 08:06:18.896057] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:26:17.031 [2024-10-09 08:06:18.896068] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:26:17.031 [2024-10-09 08:06:18.896079] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:26:17.031 [2024-10-09 08:06:18.896091] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:26:17.031 [2024-10-09 08:06:18.896103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.031 [2024-10-09 08:06:18.896115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:26:17.031 [2024-10-09 08:06:18.896126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.976 ms 00:26:17.031 [2024-10-09 08:06:18.896137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.031 [2024-10-09 08:06:18.938388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.031 [2024-10-09 08:06:18.938448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:17.031 [2024-10-09 08:06:18.938469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.155 ms 00:26:17.031 [2024-10-09 08:06:18.938481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.031 [2024-10-09 08:06:18.938607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.031 [2024-10-09 08:06:18.938623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:26:17.031 [2024-10-09 08:06:18.938636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:26:17.031 [2024-10-09 08:06:18.938647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.031 [2024-10-09 08:06:18.978670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.031 [2024-10-09 08:06:18.978726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:17.031 [2024-10-09 08:06:18.978750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.923 ms 00:26:17.031 [2024-10-09 08:06:18.978762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.031 [2024-10-09 08:06:18.978831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.031 [2024-10-09 08:06:18.978848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:17.031 [2024-10-09 08:06:18.978860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:26:17.031 [2024-10-09 08:06:18.978872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.031 [2024-10-09 08:06:18.979276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.031 [2024-10-09 08:06:18.979296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:17.031 [2024-10-09 08:06:18.979310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.288 ms 00:26:17.031 [2024-10-09 08:06:18.979327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.031 [2024-10-09 08:06:18.979521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.031 [2024-10-09 08:06:18.979541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:17.031 [2024-10-09 08:06:18.979553] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.137 ms 00:26:17.031 [2024-10-09 08:06:18.979564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.031 [2024-10-09 08:06:18.995480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.031 [2024-10-09 08:06:18.995525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:17.031 [2024-10-09 08:06:18.995543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.887 ms 00:26:17.031 [2024-10-09 08:06:18.995555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.031 [2024-10-09 08:06:19.011847] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:26:17.031 [2024-10-09 08:06:19.011892] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:26:17.031 [2024-10-09 08:06:19.011912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.031 [2024-10-09 08:06:19.011924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:26:17.031 [2024-10-09 08:06:19.011937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.203 ms 00:26:17.031 [2024-10-09 08:06:19.011948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.290 [2024-10-09 08:06:19.041716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.290 [2024-10-09 08:06:19.041763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:26:17.291 [2024-10-09 08:06:19.041781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.719 ms 00:26:17.291 [2024-10-09 08:06:19.041794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.291 [2024-10-09 08:06:19.057504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.291 [2024-10-09 08:06:19.057676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:26:17.291 [2024-10-09 08:06:19.057704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.658 ms 00:26:17.291 [2024-10-09 08:06:19.057716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.291 [2024-10-09 08:06:19.073164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.291 [2024-10-09 08:06:19.073325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:26:17.291 [2024-10-09 08:06:19.073371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.385 ms 00:26:17.291 [2024-10-09 08:06:19.073385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.291 [2024-10-09 08:06:19.074179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.291 [2024-10-09 08:06:19.074217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:26:17.291 [2024-10-09 08:06:19.074244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.670 ms 00:26:17.291 [2024-10-09 08:06:19.074256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.291 [2024-10-09 08:06:19.146203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.291 [2024-10-09 08:06:19.146266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:26:17.291 [2024-10-09 08:06:19.146286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 71.921 ms 00:26:17.291 [2024-10-09 08:06:19.146298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.291 [2024-10-09 08:06:19.158969] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:26:17.291 [2024-10-09 08:06:19.161536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.291 [2024-10-09 08:06:19.161573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:26:17.291 [2024-10-09 08:06:19.161591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.151 ms 00:26:17.291 [2024-10-09 08:06:19.161609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.291 [2024-10-09 08:06:19.161724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.291 [2024-10-09 08:06:19.161744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:26:17.291 [2024-10-09 08:06:19.161757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:26:17.291 [2024-10-09 08:06:19.161769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.291 [2024-10-09 08:06:19.162395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.291 [2024-10-09 08:06:19.162423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:26:17.291 [2024-10-09 08:06:19.162437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.570 ms 00:26:17.291 [2024-10-09 08:06:19.162448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.291 [2024-10-09 08:06:19.162490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.291 [2024-10-09 08:06:19.162506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:26:17.291 [2024-10-09 08:06:19.162518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:26:17.291 [2024-10-09 08:06:19.162529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.291 [2024-10-09 08:06:19.162571] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:26:17.291 [2024-10-09 08:06:19.162587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.291 [2024-10-09 08:06:19.162598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:26:17.291 [2024-10-09 08:06:19.162611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:26:17.291 [2024-10-09 08:06:19.162626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.291 [2024-10-09 08:06:19.193836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.291 [2024-10-09 08:06:19.193884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:26:17.291 [2024-10-09 08:06:19.193903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.184 ms 00:26:17.291 [2024-10-09 08:06:19.193915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.291 [2024-10-09 08:06:19.194005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.291 [2024-10-09 08:06:19.194025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:26:17.291 [2024-10-09 08:06:19.194038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:26:17.291 [2024-10-09 08:06:19.194049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:26:17.291 [2024-10-09 08:06:19.195314] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 328.015 ms, result 0 00:26:18.668  [2024-10-09T08:06:21.615Z] Copying: 26/1024 [MB] (26 MBps) [2024-10-09T08:06:22.550Z] Copying: 54/1024 [MB] (28 MBps) [2024-10-09T08:06:23.484Z] Copying: 82/1024 [MB] (27 MBps) [2024-10-09T08:06:24.440Z] Copying: 110/1024 [MB] (27 MBps) [2024-10-09T08:06:25.814Z] Copying: 138/1024 [MB] (27 MBps) [2024-10-09T08:06:26.753Z] Copying: 165/1024 [MB] (27 MBps) [2024-10-09T08:06:27.688Z] Copying: 191/1024 [MB] (26 MBps) [2024-10-09T08:06:28.623Z] Copying: 218/1024 [MB] (27 MBps) [2024-10-09T08:06:29.558Z] Copying: 246/1024 [MB] (27 MBps) [2024-10-09T08:06:30.494Z] Copying: 271/1024 [MB] (25 MBps) [2024-10-09T08:06:31.428Z] Copying: 299/1024 [MB] (27 MBps) [2024-10-09T08:06:32.803Z] Copying: 324/1024 [MB] (25 MBps) [2024-10-09T08:06:33.736Z] Copying: 350/1024 [MB] (26 MBps) [2024-10-09T08:06:34.671Z] Copying: 377/1024 [MB] (26 MBps) [2024-10-09T08:06:35.605Z] Copying: 403/1024 [MB] (26 MBps) [2024-10-09T08:06:36.539Z] Copying: 429/1024 [MB] (26 MBps) [2024-10-09T08:06:37.473Z] Copying: 456/1024 [MB] (26 MBps) [2024-10-09T08:06:38.850Z] Copying: 484/1024 [MB] (27 MBps) [2024-10-09T08:06:39.415Z] Copying: 512/1024 [MB] (28 MBps) [2024-10-09T08:06:40.787Z] Copying: 540/1024 [MB] (28 MBps) [2024-10-09T08:06:41.720Z] Copying: 568/1024 [MB] (28 MBps) [2024-10-09T08:06:42.672Z] Copying: 594/1024 [MB] (25 MBps) [2024-10-09T08:06:43.636Z] Copying: 619/1024 [MB] (24 MBps) [2024-10-09T08:06:44.566Z] Copying: 645/1024 [MB] (26 MBps) [2024-10-09T08:06:45.501Z] Copying: 674/1024 [MB] (29 MBps) [2024-10-09T08:06:46.434Z] Copying: 701/1024 [MB] (27 MBps) [2024-10-09T08:06:47.809Z] Copying: 728/1024 [MB] (26 MBps) [2024-10-09T08:06:48.742Z] Copying: 754/1024 [MB] (26 MBps) [2024-10-09T08:06:49.677Z] Copying: 782/1024 [MB] (27 MBps) [2024-10-09T08:06:50.611Z] Copying: 809/1024 [MB] (27 MBps) [2024-10-09T08:06:51.545Z] Copying: 837/1024 [MB] (27 MBps) [2024-10-09T08:06:52.480Z] Copying: 864/1024 [MB] (26 MBps) [2024-10-09T08:06:53.414Z] Copying: 892/1024 [MB] (28 MBps) [2024-10-09T08:06:54.788Z] Copying: 919/1024 [MB] (26 MBps) [2024-10-09T08:06:55.722Z] Copying: 947/1024 [MB] (28 MBps) [2024-10-09T08:06:56.657Z] Copying: 976/1024 [MB] (28 MBps) [2024-10-09T08:06:57.223Z] Copying: 1004/1024 [MB] (28 MBps) [2024-10-09T08:06:57.223Z] Copying: 1024/1024 [MB] (average 27 MBps)[2024-10-09 08:06:57.103875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:55.211 [2024-10-09 08:06:57.104159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:26:55.212 [2024-10-09 08:06:57.104198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:26:55.212 [2024-10-09 08:06:57.104214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:55.212 [2024-10-09 08:06:57.104268] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:26:55.212 [2024-10-09 08:06:57.108323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:55.212 [2024-10-09 08:06:57.108369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:26:55.212 [2024-10-09 08:06:57.108386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.027 ms 00:26:55.212 [2024-10-09 08:06:57.108399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:55.212 [2024-10-09 
08:06:57.109037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:55.212 [2024-10-09 08:06:57.109074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:26:55.212 [2024-10-09 08:06:57.109102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.604 ms 00:26:55.212 [2024-10-09 08:06:57.109126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:55.212 [2024-10-09 08:06:57.113475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:55.212 [2024-10-09 08:06:57.113534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:26:55.212 [2024-10-09 08:06:57.113553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.314 ms 00:26:55.212 [2024-10-09 08:06:57.113567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:55.212 [2024-10-09 08:06:57.121909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:55.212 [2024-10-09 08:06:57.121954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:26:55.212 [2024-10-09 08:06:57.121972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.310 ms 00:26:55.212 [2024-10-09 08:06:57.121985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:55.212 [2024-10-09 08:06:57.162764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:55.212 [2024-10-09 08:06:57.162825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:26:55.212 [2024-10-09 08:06:57.162846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.636 ms 00:26:55.212 [2024-10-09 08:06:57.162860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:55.212 [2024-10-09 08:06:57.193899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:55.212 [2024-10-09 08:06:57.193999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:26:55.212 [2024-10-09 08:06:57.194034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.996 ms 00:26:55.212 [2024-10-09 08:06:57.194061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:55.212 [2024-10-09 08:06:57.195854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:55.212 [2024-10-09 08:06:57.195914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:26:55.212 [2024-10-09 08:06:57.195940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.732 ms 00:26:55.212 [2024-10-09 08:06:57.195963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:55.471 [2024-10-09 08:06:57.249106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:55.471 [2024-10-09 08:06:57.249164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:26:55.471 [2024-10-09 08:06:57.249185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 53.104 ms 00:26:55.471 [2024-10-09 08:06:57.249199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:55.471 [2024-10-09 08:06:57.287068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:55.471 [2024-10-09 08:06:57.287117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:26:55.471 [2024-10-09 08:06:57.287137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.832 ms 00:26:55.471 [2024-10-09 08:06:57.287151] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:55.471 [2024-10-09 08:06:57.323017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:55.471 [2024-10-09 08:06:57.323079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:26:55.471 [2024-10-09 08:06:57.323097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.833 ms 00:26:55.471 [2024-10-09 08:06:57.323109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:55.471 [2024-10-09 08:06:57.354026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:55.471 [2024-10-09 08:06:57.354071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:26:55.471 [2024-10-09 08:06:57.354087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.837 ms 00:26:55.471 [2024-10-09 08:06:57.354098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:55.471 [2024-10-09 08:06:57.354128] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:26:55.471 [2024-10-09 08:06:57.354148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:26:55.471 [2024-10-09 08:06:57.354177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:26:55.471 [2024-10-09 08:06:57.354190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:26:55.471 [2024-10-09 08:06:57.354202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:26:55.471 [2024-10-09 08:06:57.354214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:26:55.472 [2024-10-09 08:06:57.354226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:26:55.472 [2024-10-09 08:06:57.354238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:26:55.472 [2024-10-09 08:06:57.354249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:26:55.472 [2024-10-09 08:06:57.354260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:26:55.472 [2024-10-09 08:06:57.354272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:26:55.472 [2024-10-09 08:06:57.354284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:26:55.472 [2024-10-09 08:06:57.354296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:26:55.472 [2024-10-09 08:06:57.354307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:26:55.472 [2024-10-09 08:06:57.354319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:26:55.472 [2024-10-09 08:06:57.354342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:26:55.472 [2024-10-09 08:06:57.354356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:26:55.472 [2024-10-09 08:06:57.354370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:26:55.472 [2024-10-09 08:06:57.354382] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:26:55.472 [2024-10-09 08:06:57.354394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:26:55.472 [2024-10-09 08:06:57.354406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:26:55.472 [2024-10-09 08:06:57.354418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:26:55.472 [2024-10-09 08:06:57.354430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:26:55.472 [2024-10-09 08:06:57.354441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:26:55.472 [2024-10-09 08:06:57.354452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:26:55.472 [2024-10-09 08:06:57.354464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:26:55.472 [2024-10-09 08:06:57.354475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:26:55.472 [2024-10-09 08:06:57.354487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:26:55.472 [2024-10-09 08:06:57.354498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:26:55.472 [2024-10-09 08:06:57.354510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:26:55.472 [2024-10-09 08:06:57.354521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:26:55.472 [2024-10-09 08:06:57.354533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:26:55.472 [2024-10-09 08:06:57.354544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:26:55.472 [2024-10-09 08:06:57.354556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:26:55.472 [2024-10-09 08:06:57.354568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:26:55.472 [2024-10-09 08:06:57.354579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:26:55.472 [2024-10-09 08:06:57.354590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:26:55.472 [2024-10-09 08:06:57.354603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:26:55.472 [2024-10-09 08:06:57.354614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:26:55.472 [2024-10-09 08:06:57.354626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:26:55.472 [2024-10-09 08:06:57.354637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:26:55.472 [2024-10-09 08:06:57.354648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:26:55.472 [2024-10-09 08:06:57.354660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:26:55.472 [2024-10-09 
08:06:57.354671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:26:55.472 [2024-10-09 08:06:57.354683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:26:55.472 [2024-10-09 08:06:57.354694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:26:55.472 [2024-10-09 08:06:57.354705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:26:55.472 [2024-10-09 08:06:57.354717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:26:55.472 [2024-10-09 08:06:57.354728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:26:55.472 [2024-10-09 08:06:57.354741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:26:55.472 [2024-10-09 08:06:57.354752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:26:55.472 [2024-10-09 08:06:57.354764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:26:55.472 [2024-10-09 08:06:57.354775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:26:55.472 [2024-10-09 08:06:57.354787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:26:55.472 [2024-10-09 08:06:57.354798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:26:55.472 [2024-10-09 08:06:57.354810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:26:55.472 [2024-10-09 08:06:57.354821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:26:55.472 [2024-10-09 08:06:57.354833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:26:55.472 [2024-10-09 08:06:57.354844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:26:55.472 [2024-10-09 08:06:57.354856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:26:55.472 [2024-10-09 08:06:57.354867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:26:55.472 [2024-10-09 08:06:57.354879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:26:55.472 [2024-10-09 08:06:57.354890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:26:55.472 [2024-10-09 08:06:57.354902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:26:55.472 [2024-10-09 08:06:57.354913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:26:55.472 [2024-10-09 08:06:57.354925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:26:55.472 [2024-10-09 08:06:57.354937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:26:55.472 [2024-10-09 08:06:57.354948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 
00:26:55.472 [2024-10-09 08:06:57.354960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:26:55.472 [2024-10-09 08:06:57.354971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:26:55.472 [2024-10-09 08:06:57.354982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:26:55.472 [2024-10-09 08:06:57.354994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:26:55.472 [2024-10-09 08:06:57.355005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:26:55.472 [2024-10-09 08:06:57.355017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:26:55.472 [2024-10-09 08:06:57.355028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:26:55.472 [2024-10-09 08:06:57.355040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:26:55.472 [2024-10-09 08:06:57.355051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:26:55.472 [2024-10-09 08:06:57.355062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:26:55.472 [2024-10-09 08:06:57.355074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:26:55.472 [2024-10-09 08:06:57.355085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:26:55.472 [2024-10-09 08:06:57.355097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:26:55.472 [2024-10-09 08:06:57.355110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:26:55.472 [2024-10-09 08:06:57.355121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:26:55.472 [2024-10-09 08:06:57.355134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:26:55.472 [2024-10-09 08:06:57.355146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:26:55.472 [2024-10-09 08:06:57.355158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:26:55.472 [2024-10-09 08:06:57.355169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:26:55.472 [2024-10-09 08:06:57.355181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:26:55.472 [2024-10-09 08:06:57.355192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:26:55.472 [2024-10-09 08:06:57.355204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:26:55.472 [2024-10-09 08:06:57.355215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:26:55.472 [2024-10-09 08:06:57.355227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:26:55.472 [2024-10-09 08:06:57.355238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 
wr_cnt: 0 state: free 00:26:55.472 [2024-10-09 08:06:57.355250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:26:55.472 [2024-10-09 08:06:57.355262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:26:55.472 [2024-10-09 08:06:57.355273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:26:55.472 [2024-10-09 08:06:57.355285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:26:55.472 [2024-10-09 08:06:57.355296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:26:55.472 [2024-10-09 08:06:57.355307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:26:55.472 [2024-10-09 08:06:57.355319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:26:55.472 [2024-10-09 08:06:57.355340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:26:55.473 [2024-10-09 08:06:57.355363] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:26:55.473 [2024-10-09 08:06:57.355374] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 79fb1636-bf1a-4768-a170-c1f22467d828 00:26:55.473 [2024-10-09 08:06:57.355386] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:26:55.473 [2024-10-09 08:06:57.355397] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:26:55.473 [2024-10-09 08:06:57.355408] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:26:55.473 [2024-10-09 08:06:57.355419] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:26:55.473 [2024-10-09 08:06:57.355430] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:26:55.473 [2024-10-09 08:06:57.355441] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:26:55.473 [2024-10-09 08:06:57.355460] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:26:55.473 [2024-10-09 08:06:57.355470] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:26:55.473 [2024-10-09 08:06:57.355480] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:26:55.473 [2024-10-09 08:06:57.355492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:55.473 [2024-10-09 08:06:57.355522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:26:55.473 [2024-10-09 08:06:57.355535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.365 ms 00:26:55.473 [2024-10-09 08:06:57.355546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:55.473 [2024-10-09 08:06:57.372162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:55.473 [2024-10-09 08:06:57.372202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:26:55.473 [2024-10-09 08:06:57.372219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.572 ms 00:26:55.473 [2024-10-09 08:06:57.372239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:55.473 [2024-10-09 08:06:57.372711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:55.473 [2024-10-09 08:06:57.372739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L 
checkpointing 00:26:55.473 [2024-10-09 08:06:57.372753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.442 ms 00:26:55.473 [2024-10-09 08:06:57.372764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:55.473 [2024-10-09 08:06:57.409919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:55.473 [2024-10-09 08:06:57.409976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:55.473 [2024-10-09 08:06:57.410000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:55.473 [2024-10-09 08:06:57.410012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:55.473 [2024-10-09 08:06:57.410089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:55.473 [2024-10-09 08:06:57.410104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:55.473 [2024-10-09 08:06:57.410116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:55.473 [2024-10-09 08:06:57.410127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:55.473 [2024-10-09 08:06:57.410219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:55.473 [2024-10-09 08:06:57.410238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:55.473 [2024-10-09 08:06:57.410251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:55.473 [2024-10-09 08:06:57.410269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:55.473 [2024-10-09 08:06:57.410291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:55.473 [2024-10-09 08:06:57.410304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:55.473 [2024-10-09 08:06:57.410315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:55.473 [2024-10-09 08:06:57.410327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:55.731 [2024-10-09 08:06:57.546549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:55.731 [2024-10-09 08:06:57.546606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:55.731 [2024-10-09 08:06:57.546632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:55.731 [2024-10-09 08:06:57.546644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:55.731 [2024-10-09 08:06:57.630837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:55.731 [2024-10-09 08:06:57.630903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:55.731 [2024-10-09 08:06:57.630921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:55.732 [2024-10-09 08:06:57.630933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:55.732 [2024-10-09 08:06:57.631034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:55.732 [2024-10-09 08:06:57.631051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:55.732 [2024-10-09 08:06:57.631063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:55.732 [2024-10-09 08:06:57.631074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:55.732 [2024-10-09 08:06:57.631129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:55.732 [2024-10-09 
08:06:57.631143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:55.732 [2024-10-09 08:06:57.631155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:55.732 [2024-10-09 08:06:57.631166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:55.732 [2024-10-09 08:06:57.631290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:55.732 [2024-10-09 08:06:57.631320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:55.732 [2024-10-09 08:06:57.631352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:55.732 [2024-10-09 08:06:57.631365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:55.732 [2024-10-09 08:06:57.631420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:55.732 [2024-10-09 08:06:57.631444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:26:55.732 [2024-10-09 08:06:57.631456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:55.732 [2024-10-09 08:06:57.631468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:55.732 [2024-10-09 08:06:57.631512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:55.732 [2024-10-09 08:06:57.631527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:55.732 [2024-10-09 08:06:57.631539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:55.732 [2024-10-09 08:06:57.631549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:55.732 [2024-10-09 08:06:57.631607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:55.732 [2024-10-09 08:06:57.631624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:55.732 [2024-10-09 08:06:57.631637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:55.732 [2024-10-09 08:06:57.631648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:55.732 [2024-10-09 08:06:57.631798] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 527.894 ms, result 0 00:26:57.116 00:26:57.116 00:26:57.116 08:06:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:26:59.018 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:26:59.018 08:07:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:26:59.019 08:07:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:26:59.019 08:07:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:26:59.019 08:07:01 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:26:59.277 08:07:01 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:26:59.535 08:07:01 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:26:59.535 08:07:01 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:26:59.535 08:07:01 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@37 -- # killprocess 78922 00:26:59.535 08:07:01 ftl.ftl_dirty_shutdown -- 
common/autotest_common.sh@950 -- # '[' -z 78922 ']' 00:26:59.535 08:07:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@954 -- # kill -0 78922 00:26:59.535 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (78922) - No such process 00:26:59.535 Process with pid 78922 is not found 00:26:59.535 08:07:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@977 -- # echo 'Process with pid 78922 is not found' 00:26:59.535 08:07:01 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:26:59.794 08:07:01 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:26:59.794 Remove shared memory files 00:26:59.794 08:07:01 ftl.ftl_dirty_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:26:59.794 08:07:01 ftl.ftl_dirty_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:26:59.794 08:07:01 ftl.ftl_dirty_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:26:59.794 08:07:01 ftl.ftl_dirty_shutdown -- ftl/common.sh@207 -- # rm -f rm -f 00:26:59.794 08:07:01 ftl.ftl_dirty_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:26:59.794 08:07:01 ftl.ftl_dirty_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:26:59.794 00:26:59.794 real 3m44.921s 00:26:59.794 user 4m20.168s 00:26:59.794 sys 0m37.791s 00:26:59.794 08:07:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:59.794 08:07:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:26:59.794 ************************************ 00:26:59.794 END TEST ftl_dirty_shutdown 00:26:59.794 ************************************ 00:26:59.794 08:07:01 ftl -- ftl/ftl.sh@78 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:26:59.794 08:07:01 ftl -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:26:59.794 08:07:01 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:59.794 08:07:01 ftl -- common/autotest_common.sh@10 -- # set +x 00:26:59.794 ************************************ 00:26:59.794 START TEST ftl_upgrade_shutdown 00:26:59.794 ************************************ 00:26:59.794 08:07:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:26:59.794 * Looking for test storage... 
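The dirty-shutdown epilogue that just ran reduces to a short recipe: check the post-recovery md5 sum, delete the generated config and test files, then probe the target PID before attempting to kill it. A condensed sketch in plain bash, with placeholder names (svcpid, bare file names) standing in for the real PID 78922 and the absolute paths in the trace:

    # Hypothetical condensed form of the restore_kill/killprocess steps traced
    # above; not the real autotest_common.sh code, just the shape of it.
    restore_kill_sketch() {
        local svcpid=$1
        md5sum -c testfile2.md5 || return 1   # data must survive the dirty shutdown
        rm -f ftl.json testfile testfile2 testfile.md5 testfile2.md5
        # kill -0 delivers no signal; it only tests whether the PID is alive
        # and signalable. Here the app has already exited, so the probe fails
        # and the script just logs "Process with pid ... is not found".
        if kill -0 "$svcpid" 2>/dev/null; then
            kill "$svcpid"
        else
            echo "Process with pid $svcpid is not found"
        fi
    }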
00:26:59.794 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:26:59.794 08:07:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:26:59.794 08:07:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1681 -- # lcov --version 00:26:59.794 08:07:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:27:00.053 08:07:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:27:00.053 08:07:01 ftl.ftl_upgrade_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:00.053 08:07:01 ftl.ftl_upgrade_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:00.053 08:07:01 ftl.ftl_upgrade_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:00.053 08:07:01 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:27:00.053 08:07:01 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:27:00.053 08:07:01 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:27:00.053 08:07:01 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:27:00.053 08:07:01 ftl.ftl_upgrade_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:27:00.053 08:07:01 ftl.ftl_upgrade_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:27:00.053 08:07:01 ftl.ftl_upgrade_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:27:00.053 08:07:01 ftl.ftl_upgrade_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:00.053 08:07:01 ftl.ftl_upgrade_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:27:00.053 08:07:01 ftl.ftl_upgrade_shutdown -- scripts/common.sh@345 -- # : 1 00:27:00.053 08:07:01 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:00.053 08:07:01 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:00.053 08:07:01 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # decimal 1 00:27:00.053 08:07:01 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=1 00:27:00.053 08:07:01 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:00.053 08:07:01 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 1 00:27:00.053 08:07:01 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:27:00.053 08:07:01 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # decimal 2 00:27:00.053 08:07:01 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=2 00:27:00.054 08:07:01 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:00.054 08:07:01 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 2 00:27:00.054 08:07:01 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:27:00.054 08:07:01 ftl.ftl_upgrade_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:00.054 08:07:01 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:00.054 08:07:01 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # return 0 00:27:00.054 08:07:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:00.054 08:07:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:27:00.054 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:00.054 --rc genhtml_branch_coverage=1 00:27:00.054 --rc genhtml_function_coverage=1 00:27:00.054 --rc genhtml_legend=1 00:27:00.054 --rc geninfo_all_blocks=1 00:27:00.054 --rc geninfo_unexecuted_blocks=1 00:27:00.054 00:27:00.054 ' 00:27:00.054 08:07:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:27:00.054 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:00.054 --rc genhtml_branch_coverage=1 00:27:00.054 --rc genhtml_function_coverage=1 00:27:00.054 --rc genhtml_legend=1 00:27:00.054 --rc geninfo_all_blocks=1 00:27:00.054 --rc geninfo_unexecuted_blocks=1 00:27:00.054 00:27:00.054 ' 00:27:00.054 08:07:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:27:00.054 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:00.054 --rc genhtml_branch_coverage=1 00:27:00.054 --rc genhtml_function_coverage=1 00:27:00.054 --rc genhtml_legend=1 00:27:00.054 --rc geninfo_all_blocks=1 00:27:00.054 --rc geninfo_unexecuted_blocks=1 00:27:00.054 00:27:00.054 ' 00:27:00.054 08:07:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:27:00.054 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:00.054 --rc genhtml_branch_coverage=1 00:27:00.054 --rc genhtml_function_coverage=1 00:27:00.054 --rc genhtml_legend=1 00:27:00.054 --rc geninfo_all_blocks=1 00:27:00.054 --rc geninfo_unexecuted_blocks=1 00:27:00.054 00:27:00.054 ' 00:27:00.054 08:07:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:27:00.054 08:07:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:27:00.054 08:07:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:27:00.054 08:07:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:27:00.054 08:07:01 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:27:00.054 08:07:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:27:00.054 08:07:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:00.054 08:07:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:27:00.054 08:07:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:27:00.054 08:07:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:00.054 08:07:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:00.054 08:07:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:27:00.054 08:07:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:27:00.054 08:07:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:00.054 08:07:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:00.054 08:07:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:27:00.054 08:07:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:27:00.054 08:07:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:00.054 08:07:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:00.054 08:07:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:27:00.054 08:07:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:27:00.054 08:07:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:27:00.054 08:07:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:27:00.054 08:07:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:27:00.054 08:07:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:27:00.054 08:07:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:27:00.054 08:07:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:27:00.054 08:07:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:00.054 08:07:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:00.054 08:07:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:27:00.054 08:07:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:27:00.054 08:07:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:27:00.054 08:07:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:11.0 00:27:00.054 08:07:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:11.0 00:27:00.054 08:07:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:27:00.054 08:07:01 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:27:00.054 08:07:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:10.0 00:27:00.054 08:07:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:10.0 00:27:00.054 08:07:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:27:00.054 08:07:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:27:00.054 08:07:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:27:00.054 08:07:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:27:00.054 08:07:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:27:00.054 08:07:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:27:00.054 08:07:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:27:00.054 08:07:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:27:00.054 08:07:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=81268 00:27:00.054 08:07:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:27:00.054 08:07:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:27:00.054 08:07:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 81268 00:27:00.054 08:07:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # '[' -z 81268 ']' 00:27:00.054 08:07:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:00.054 08:07:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:00.054 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:00.054 08:07:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:00.054 08:07:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:00.054 08:07:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:00.054 [2024-10-09 08:07:02.018687] Starting SPDK v25.01-pre git sha1 1c2942c86 / DPDK 24.03.0 initialization... 
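The lcov gate that opens this test is scripts/common.sh's version comparison traced step by step: split both version strings on '.', '-' and ':', then compare numeric fields left to right, padding the shorter version with zeros. Stripped of the xtrace noise, a standalone sketch (lt_version is a hypothetical name, the traced helpers are lt and cmp_versions, and this handles numeric fields only):

    # Re-statement of the cmp_versions walk above: 1.15 < 2 because the first
    # fields already decide it (1 < 2), so lcov predates 2.x and the test
    # exports the legacy --rc lcov_* option spelling seen in LCOV_OPTS.
    lt_version() {
        local IFS='.-:'
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$2"
        local i max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( i = 0; i < max; i++ )); do
            (( ${ver1[i]:-0} < ${ver2[i]:-0} )) && return 0
            (( ${ver1[i]:-0} > ${ver2[i]:-0} )) && return 1
        done
        return 1   # equal is not less-than
    }
    lt_version 1.15 2 && echo 'lcov predates 2.x'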
00:27:00.054 [2024-10-09 08:07:02.018875] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81268 ] 00:27:00.313 [2024-10-09 08:07:02.192962] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:00.571 [2024-10-09 08:07:02.425183] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:27:01.507 08:07:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:01.507 08:07:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # return 0 00:27:01.507 08:07:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:27:01.507 08:07:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:27:01.507 08:07:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # local params 00:27:01.507 08:07:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:27:01.507 08:07:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:27:01.507 08:07:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:27:01.507 08:07:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:11.0 ]] 00:27:01.507 08:07:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:27:01.507 08:07:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:27:01.507 08:07:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:27:01.507 08:07:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:10.0 ]] 00:27:01.507 08:07:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:27:01.507 08:07:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:27:01.507 08:07:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:27:01.507 08:07:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:27:01.507 08:07:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:11.0 20480 00:27:01.507 08:07:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@54 -- # local name=base 00:27:01.507 08:07:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:27:01.507 08:07:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@56 -- # local size=20480 00:27:01.507 08:07:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:27:01.507 08:07:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0 00:27:01.766 08:07:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # base_bdev=basen1 00:27:01.766 08:07:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@62 -- # local base_size 00:27:01.766 08:07:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # get_bdev_size basen1 00:27:01.766 08:07:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=basen1 00:27:01.766 08:07:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:27:01.766 08:07:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:27:01.766 08:07:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1381 
-- # local nb 00:27:01.766 08:07:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:27:02.024 08:07:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:27:02.024 { 00:27:02.024 "name": "basen1", 00:27:02.024 "aliases": [ 00:27:02.024 "6b00d99e-99da-417b-9307-8f0c03c06e87" 00:27:02.024 ], 00:27:02.024 "product_name": "NVMe disk", 00:27:02.024 "block_size": 4096, 00:27:02.024 "num_blocks": 1310720, 00:27:02.024 "uuid": "6b00d99e-99da-417b-9307-8f0c03c06e87", 00:27:02.024 "numa_id": -1, 00:27:02.024 "assigned_rate_limits": { 00:27:02.024 "rw_ios_per_sec": 0, 00:27:02.024 "rw_mbytes_per_sec": 0, 00:27:02.024 "r_mbytes_per_sec": 0, 00:27:02.024 "w_mbytes_per_sec": 0 00:27:02.024 }, 00:27:02.024 "claimed": true, 00:27:02.024 "claim_type": "read_many_write_one", 00:27:02.024 "zoned": false, 00:27:02.024 "supported_io_types": { 00:27:02.024 "read": true, 00:27:02.024 "write": true, 00:27:02.024 "unmap": true, 00:27:02.024 "flush": true, 00:27:02.024 "reset": true, 00:27:02.024 "nvme_admin": true, 00:27:02.024 "nvme_io": true, 00:27:02.024 "nvme_io_md": false, 00:27:02.024 "write_zeroes": true, 00:27:02.024 "zcopy": false, 00:27:02.024 "get_zone_info": false, 00:27:02.024 "zone_management": false, 00:27:02.024 "zone_append": false, 00:27:02.024 "compare": true, 00:27:02.024 "compare_and_write": false, 00:27:02.024 "abort": true, 00:27:02.024 "seek_hole": false, 00:27:02.024 "seek_data": false, 00:27:02.024 "copy": true, 00:27:02.024 "nvme_iov_md": false 00:27:02.024 }, 00:27:02.024 "driver_specific": { 00:27:02.024 "nvme": [ 00:27:02.024 { 00:27:02.024 "pci_address": "0000:00:11.0", 00:27:02.024 "trid": { 00:27:02.024 "trtype": "PCIe", 00:27:02.024 "traddr": "0000:00:11.0" 00:27:02.024 }, 00:27:02.024 "ctrlr_data": { 00:27:02.024 "cntlid": 0, 00:27:02.024 "vendor_id": "0x1b36", 00:27:02.024 "model_number": "QEMU NVMe Ctrl", 00:27:02.024 "serial_number": "12341", 00:27:02.024 "firmware_revision": "8.0.0", 00:27:02.024 "subnqn": "nqn.2019-08.org.qemu:12341", 00:27:02.024 "oacs": { 00:27:02.024 "security": 0, 00:27:02.024 "format": 1, 00:27:02.024 "firmware": 0, 00:27:02.024 "ns_manage": 1 00:27:02.024 }, 00:27:02.024 "multi_ctrlr": false, 00:27:02.024 "ana_reporting": false 00:27:02.025 }, 00:27:02.025 "vs": { 00:27:02.025 "nvme_version": "1.4" 00:27:02.025 }, 00:27:02.025 "ns_data": { 00:27:02.025 "id": 1, 00:27:02.025 "can_share": false 00:27:02.025 } 00:27:02.025 } 00:27:02.025 ], 00:27:02.025 "mp_policy": "active_passive" 00:27:02.025 } 00:27:02.025 } 00:27:02.025 ]' 00:27:02.025 08:07:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:27:02.025 08:07:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:27:02.025 08:07:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:27:02.025 08:07:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # nb=1310720 00:27:02.025 08:07:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:27:02.025 08:07:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # echo 5120 00:27:02.025 08:07:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:27:02.025 08:07:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:27:02.025 08:07:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:27:02.025 08:07:03 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:27:02.025 08:07:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:27:02.283 08:07:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # stores=6b6005ad-9a72-4ab8-bc50-9aef5c064f4a 00:27:02.283 08:07:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:27:02.283 08:07:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 6b6005ad-9a72-4ab8-bc50-9aef5c064f4a 00:27:02.850 08:07:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:27:03.108 08:07:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # lvs=7776f584-b678-425c-a0e0-dab18e19858b 00:27:03.108 08:07:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u 7776f584-b678-425c-a0e0-dab18e19858b 00:27:03.366 08:07:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # base_bdev=5c1c5266-f35d-4f4c-97f8-62604b4a7303 00:27:03.366 08:07:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@108 -- # [[ -z 5c1c5266-f35d-4f4c-97f8-62604b4a7303 ]] 00:27:03.366 08:07:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:10.0 5c1c5266-f35d-4f4c-97f8-62604b4a7303 5120 00:27:03.366 08:07:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@35 -- # local name=cache 00:27:03.366 08:07:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:27:03.366 08:07:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@37 -- # local base_bdev=5c1c5266-f35d-4f4c-97f8-62604b4a7303 00:27:03.366 08:07:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@38 -- # local cache_size=5120 00:27:03.366 08:07:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # get_bdev_size 5c1c5266-f35d-4f4c-97f8-62604b4a7303 00:27:03.366 08:07:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=5c1c5266-f35d-4f4c-97f8-62604b4a7303 00:27:03.366 08:07:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:27:03.366 08:07:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:27:03.366 08:07:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:27:03.366 08:07:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 5c1c5266-f35d-4f4c-97f8-62604b4a7303 00:27:03.625 08:07:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:27:03.625 { 00:27:03.625 "name": "5c1c5266-f35d-4f4c-97f8-62604b4a7303", 00:27:03.625 "aliases": [ 00:27:03.625 "lvs/basen1p0" 00:27:03.625 ], 00:27:03.625 "product_name": "Logical Volume", 00:27:03.625 "block_size": 4096, 00:27:03.625 "num_blocks": 5242880, 00:27:03.625 "uuid": "5c1c5266-f35d-4f4c-97f8-62604b4a7303", 00:27:03.625 "assigned_rate_limits": { 00:27:03.625 "rw_ios_per_sec": 0, 00:27:03.625 "rw_mbytes_per_sec": 0, 00:27:03.625 "r_mbytes_per_sec": 0, 00:27:03.625 "w_mbytes_per_sec": 0 00:27:03.625 }, 00:27:03.625 "claimed": false, 00:27:03.625 "zoned": false, 00:27:03.625 "supported_io_types": { 00:27:03.625 "read": true, 00:27:03.625 "write": true, 00:27:03.625 "unmap": true, 00:27:03.625 "flush": false, 00:27:03.625 "reset": true, 00:27:03.625 "nvme_admin": false, 00:27:03.625 "nvme_io": false, 00:27:03.625 "nvme_io_md": false, 00:27:03.625 "write_zeroes": 
true, 00:27:03.625 "zcopy": false, 00:27:03.625 "get_zone_info": false, 00:27:03.625 "zone_management": false, 00:27:03.625 "zone_append": false, 00:27:03.625 "compare": false, 00:27:03.625 "compare_and_write": false, 00:27:03.625 "abort": false, 00:27:03.625 "seek_hole": true, 00:27:03.625 "seek_data": true, 00:27:03.625 "copy": false, 00:27:03.625 "nvme_iov_md": false 00:27:03.625 }, 00:27:03.625 "driver_specific": { 00:27:03.625 "lvol": { 00:27:03.625 "lvol_store_uuid": "7776f584-b678-425c-a0e0-dab18e19858b", 00:27:03.625 "base_bdev": "basen1", 00:27:03.625 "thin_provision": true, 00:27:03.625 "num_allocated_clusters": 0, 00:27:03.625 "snapshot": false, 00:27:03.625 "clone": false, 00:27:03.625 "esnap_clone": false 00:27:03.625 } 00:27:03.625 } 00:27:03.625 } 00:27:03.625 ]' 00:27:03.625 08:07:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:27:03.625 08:07:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:27:03.625 08:07:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:27:03.625 08:07:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # nb=5242880 00:27:03.625 08:07:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=20480 00:27:03.625 08:07:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # echo 20480 00:27:03.625 08:07:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # local base_size=1024 00:27:03.625 08:07:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:27:03.625 08:07:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0 00:27:04.191 08:07:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:27:04.191 08:07:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:27:04.191 08:07:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:27:04.449 08:07:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:27:04.449 08:07:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:27:04.449 08:07:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d 5c1c5266-f35d-4f4c-97f8-62604b4a7303 -c cachen1p0 --l2p_dram_limit 2 00:27:04.708 [2024-10-09 08:07:06.492240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:04.708 [2024-10-09 08:07:06.492306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:27:04.708 [2024-10-09 08:07:06.492355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:27:04.708 [2024-10-09 08:07:06.492374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:04.708 [2024-10-09 08:07:06.492459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:04.708 [2024-10-09 08:07:06.492479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:27:04.708 [2024-10-09 08:07:06.492495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.052 ms 00:27:04.708 [2024-10-09 08:07:06.492509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:04.708 [2024-10-09 08:07:06.492545] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:27:04.708 [2024-10-09 
08:07:06.493538] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:27:04.708 [2024-10-09 08:07:06.493574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:04.708 [2024-10-09 08:07:06.493588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:27:04.708 [2024-10-09 08:07:06.493606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.035 ms 00:27:04.708 [2024-10-09 08:07:06.493622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:04.708 [2024-10-09 08:07:06.493767] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID 5c9ffe53-3028-496f-aeb2-381db4ce6ed6 00:27:04.708 [2024-10-09 08:07:06.494793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:04.708 [2024-10-09 08:07:06.494840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:27:04.708 [2024-10-09 08:07:06.494858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.023 ms 00:27:04.708 [2024-10-09 08:07:06.494876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:04.708 [2024-10-09 08:07:06.499571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:04.709 [2024-10-09 08:07:06.499627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:27:04.709 [2024-10-09 08:07:06.499647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.632 ms 00:27:04.709 [2024-10-09 08:07:06.499663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:04.709 [2024-10-09 08:07:06.499760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:04.709 [2024-10-09 08:07:06.499787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:27:04.709 [2024-10-09 08:07:06.499802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.038 ms 00:27:04.709 [2024-10-09 08:07:06.499824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:04.709 [2024-10-09 08:07:06.499924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:04.709 [2024-10-09 08:07:06.499949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:27:04.709 [2024-10-09 08:07:06.499964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:27:04.709 [2024-10-09 08:07:06.499980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:04.709 [2024-10-09 08:07:06.500014] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:27:04.709 [2024-10-09 08:07:06.504654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:04.709 [2024-10-09 08:07:06.504691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:27:04.709 [2024-10-09 08:07:06.504710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.645 ms 00:27:04.709 [2024-10-09 08:07:06.504724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:04.709 [2024-10-09 08:07:06.504768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:04.709 [2024-10-09 08:07:06.504786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:27:04.709 [2024-10-09 08:07:06.504802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:27:04.709 [2024-10-09 08:07:06.504818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:27:04.709 [2024-10-09 08:07:06.504869] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:27:04.709 [2024-10-09 08:07:06.505028] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:27:04.709 [2024-10-09 08:07:06.505052] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:27:04.709 [2024-10-09 08:07:06.505069] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:27:04.709 [2024-10-09 08:07:06.505091] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:27:04.709 [2024-10-09 08:07:06.505106] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:27:04.709 [2024-10-09 08:07:06.505122] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:27:04.709 [2024-10-09 08:07:06.505134] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:27:04.709 [2024-10-09 08:07:06.505147] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:27:04.709 [2024-10-09 08:07:06.505159] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:27:04.709 [2024-10-09 08:07:06.505174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:04.709 [2024-10-09 08:07:06.505186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:27:04.709 [2024-10-09 08:07:06.505202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.308 ms 00:27:04.709 [2024-10-09 08:07:06.505215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:04.709 [2024-10-09 08:07:06.505314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:04.709 [2024-10-09 08:07:06.505364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:27:04.709 [2024-10-09 08:07:06.505382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.070 ms 00:27:04.709 [2024-10-09 08:07:06.505395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:04.709 [2024-10-09 08:07:06.505511] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:27:04.709 [2024-10-09 08:07:06.505544] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:27:04.709 [2024-10-09 08:07:06.505562] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:27:04.709 [2024-10-09 08:07:06.505576] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:04.709 [2024-10-09 08:07:06.505592] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:27:04.709 [2024-10-09 08:07:06.505604] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:27:04.709 [2024-10-09 08:07:06.505619] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:27:04.709 [2024-10-09 08:07:06.505632] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:27:04.709 [2024-10-09 08:07:06.505645] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:27:04.709 [2024-10-09 08:07:06.505660] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:04.709 [2024-10-09 08:07:06.505674] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:27:04.709 [2024-10-09 08:07:06.505687] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl] offset: 14.75 MiB 00:27:04.709 [2024-10-09 08:07:06.505701] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:04.709 [2024-10-09 08:07:06.505713] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:27:04.709 [2024-10-09 08:07:06.505737] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:27:04.709 [2024-10-09 08:07:06.505749] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:04.709 [2024-10-09 08:07:06.505766] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:27:04.709 [2024-10-09 08:07:06.505778] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:27:04.709 [2024-10-09 08:07:06.505792] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:04.709 [2024-10-09 08:07:06.505804] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:27:04.709 [2024-10-09 08:07:06.505820] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:27:04.709 [2024-10-09 08:07:06.505832] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:04.709 [2024-10-09 08:07:06.505846] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:27:04.709 [2024-10-09 08:07:06.505858] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:27:04.709 [2024-10-09 08:07:06.505872] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:04.709 [2024-10-09 08:07:06.505884] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:27:04.709 [2024-10-09 08:07:06.505898] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:27:04.709 [2024-10-09 08:07:06.505910] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:04.709 [2024-10-09 08:07:06.505924] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:27:04.709 [2024-10-09 08:07:06.505936] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:27:04.709 [2024-10-09 08:07:06.505951] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:04.709 [2024-10-09 08:07:06.505962] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:27:04.709 [2024-10-09 08:07:06.505978] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:27:04.709 [2024-10-09 08:07:06.505991] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:04.709 [2024-10-09 08:07:06.506005] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:27:04.709 [2024-10-09 08:07:06.506016] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:27:04.709 [2024-10-09 08:07:06.506030] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:04.709 [2024-10-09 08:07:06.506043] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:27:04.709 [2024-10-09 08:07:06.506057] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:27:04.709 [2024-10-09 08:07:06.506069] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:04.709 [2024-10-09 08:07:06.506083] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:27:04.709 [2024-10-09 08:07:06.506096] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:27:04.709 [2024-10-09 08:07:06.506110] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:04.709 [2024-10-09 08:07:06.506122] ftl_layout.c: 775:ftl_layout_dump: 
*NOTICE*: [FTL][ftl] Base device layout: 00:27:04.709 [2024-10-09 08:07:06.506137] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:27:04.709 [2024-10-09 08:07:06.506152] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:27:04.709 [2024-10-09 08:07:06.506170] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:04.709 [2024-10-09 08:07:06.506183] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:27:04.709 [2024-10-09 08:07:06.506200] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:27:04.709 [2024-10-09 08:07:06.506212] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:27:04.709 [2024-10-09 08:07:06.506227] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:27:04.709 [2024-10-09 08:07:06.506239] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:27:04.709 [2024-10-09 08:07:06.506253] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:27:04.709 [2024-10-09 08:07:06.506270] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:27:04.709 [2024-10-09 08:07:06.506288] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:04.709 [2024-10-09 08:07:06.506302] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:27:04.709 [2024-10-09 08:07:06.506324] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:27:04.709 [2024-10-09 08:07:06.506352] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:27:04.709 [2024-10-09 08:07:06.506368] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:27:04.709 [2024-10-09 08:07:06.506381] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:27:04.709 [2024-10-09 08:07:06.506396] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:27:04.709 [2024-10-09 08:07:06.506408] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:27:04.709 [2024-10-09 08:07:06.506423] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:27:04.709 [2024-10-09 08:07:06.506435] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:27:04.709 [2024-10-09 08:07:06.506452] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:27:04.709 [2024-10-09 08:07:06.506464] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:27:04.710 [2024-10-09 08:07:06.506478] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:27:04.710 [2024-10-09 08:07:06.506492] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:27:04.710 [2024-10-09 08:07:06.506507] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:27:04.710 [2024-10-09 08:07:06.506519] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:27:04.710 [2024-10-09 08:07:06.506537] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:04.710 [2024-10-09 08:07:06.506551] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:04.710 [2024-10-09 08:07:06.506566] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:27:04.710 [2024-10-09 08:07:06.506580] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:27:04.710 [2024-10-09 08:07:06.506594] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:27:04.710 [2024-10-09 08:07:06.506609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:04.710 [2024-10-09 08:07:06.506623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:27:04.710 [2024-10-09 08:07:06.506637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.173 ms 00:27:04.710 [2024-10-09 08:07:06.506652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:04.710 [2024-10-09 08:07:06.506711] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 
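The two layout dumps above are the same map in different units: ftl_layout_dump prints offsets and sizes in MiB, while the superblock metadata dump prints hex block offsets and sizes in 4 KiB FTL blocks. The conversion, and the L2P sizing that makes the 2 MiB DRAM limit a real constraint, check out with a few lines of arithmetic (blocks_to_mib is a hypothetical helper; the 4096-byte block size is the one reported for both bdevs):

    # Unit check: 4 KiB blocks (hex, from the SB metadata dump) -> MiB.
    blocks_to_mib() {
        awk -v blocks=$(( $1 )) 'BEGIN { printf "%.2f MiB\n", blocks * 4096 / 1048576 }'
    }
    blocks_to_mib 0xe80      # l2p region: 14.50 MiB, matching the dump
    blocks_to_mib 0x800      # one P2L checkpoint region: 8.00 MiB
    blocks_to_mib 0x480000   # base data region (data_btm): 18432.00 MiB
    # The full L2P table, 3774873 entries x 4-byte addresses per the dump,
    # needs ~14.4 MiB, far above --l2p_dram_limit 2, so only a slice can
    # stay resident (see the l2p_cache_init notice during startup below).
    awk 'BEGIN { printf "full L2P: %.1f MiB\n", 3774873 * 4 / 1048576 }'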
00:27:04.710 [2024-10-09 08:07:06.506735] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:27:06.611 [2024-10-09 08:07:08.512672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:06.611 [2024-10-09 08:07:08.512741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:27:06.611 [2024-10-09 08:07:08.512764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2005.973 ms 00:27:06.611 [2024-10-09 08:07:08.512782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:06.611 [2024-10-09 08:07:08.545728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:06.611 [2024-10-09 08:07:08.545836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:27:06.611 [2024-10-09 08:07:08.545858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 32.677 ms 00:27:06.611 [2024-10-09 08:07:08.545873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:06.611 [2024-10-09 08:07:08.546009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:06.611 [2024-10-09 08:07:08.546035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:27:06.611 [2024-10-09 08:07:08.546050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:27:06.611 [2024-10-09 08:07:08.546067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:06.611 [2024-10-09 08:07:08.597881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:06.611 [2024-10-09 08:07:08.597958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:27:06.611 [2024-10-09 08:07:08.597983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 51.734 ms 00:27:06.611 [2024-10-09 08:07:08.598000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:06.611 [2024-10-09 08:07:08.598071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:06.611 [2024-10-09 08:07:08.598093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:27:06.611 [2024-10-09 08:07:08.598125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:27:06.611 [2024-10-09 08:07:08.598140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:06.611 [2024-10-09 08:07:08.598572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:06.611 [2024-10-09 08:07:08.598599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:27:06.611 [2024-10-09 08:07:08.598625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.320 ms 00:27:06.611 [2024-10-09 08:07:08.598645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:06.611 [2024-10-09 08:07:08.598701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:06.611 [2024-10-09 08:07:08.598736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:27:06.611 [2024-10-09 08:07:08.598750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.027 ms 00:27:06.611 [2024-10-09 08:07:08.598766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:06.611 [2024-10-09 08:07:08.616109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:06.611 [2024-10-09 08:07:08.616168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:27:06.611 [2024-10-09 08:07:08.616188] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.315 ms 00:27:06.611 [2024-10-09 08:07:08.616203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:06.873 [2024-10-09 08:07:08.630699] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:27:06.873 [2024-10-09 08:07:08.631627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:06.873 [2024-10-09 08:07:08.631659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:27:06.873 [2024-10-09 08:07:08.631691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.305 ms 00:27:06.873 [2024-10-09 08:07:08.631720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:06.873 [2024-10-09 08:07:08.657577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:06.873 [2024-10-09 08:07:08.657664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:27:06.873 [2024-10-09 08:07:08.657694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.800 ms 00:27:06.873 [2024-10-09 08:07:08.657709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:06.873 [2024-10-09 08:07:08.657822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:06.873 [2024-10-09 08:07:08.657844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:27:06.873 [2024-10-09 08:07:08.657865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.056 ms 00:27:06.873 [2024-10-09 08:07:08.657879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:06.873 [2024-10-09 08:07:08.689415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:06.873 [2024-10-09 08:07:08.689473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:27:06.873 [2024-10-09 08:07:08.689496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 31.460 ms 00:27:06.873 [2024-10-09 08:07:08.689510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:06.873 [2024-10-09 08:07:08.720725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:06.873 [2024-10-09 08:07:08.720773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:27:06.873 [2024-10-09 08:07:08.720796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 31.152 ms 00:27:06.873 [2024-10-09 08:07:08.720810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:06.873 [2024-10-09 08:07:08.721559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:06.873 [2024-10-09 08:07:08.721592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:27:06.873 [2024-10-09 08:07:08.721612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.693 ms 00:27:06.873 [2024-10-09 08:07:08.721626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:06.873 [2024-10-09 08:07:08.806170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:06.873 [2024-10-09 08:07:08.806239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:27:06.873 [2024-10-09 08:07:08.806268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 84.464 ms 00:27:06.873 [2024-10-09 08:07:08.806287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:06.873 [2024-10-09 08:07:08.839611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:27:06.873 [2024-10-09 08:07:08.839713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:27:06.873 [2024-10-09 08:07:08.839751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 33.169 ms 00:27:06.873 [2024-10-09 08:07:08.839775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:06.873 [2024-10-09 08:07:08.872631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:06.873 [2024-10-09 08:07:08.872682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim log 00:27:06.873 [2024-10-09 08:07:08.872706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 32.747 ms 00:27:06.873 [2024-10-09 08:07:08.872720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:07.134 [2024-10-09 08:07:08.904268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:07.134 [2024-10-09 08:07:08.904314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:27:07.134 [2024-10-09 08:07:08.904357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 31.485 ms 00:27:07.134 [2024-10-09 08:07:08.904375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:07.134 [2024-10-09 08:07:08.904439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:07.134 [2024-10-09 08:07:08.904460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:27:07.134 [2024-10-09 08:07:08.904480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:27:07.134 [2024-10-09 08:07:08.904496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:07.134 [2024-10-09 08:07:08.904624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:07.134 [2024-10-09 08:07:08.904645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:27:07.134 [2024-10-09 08:07:08.904661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.043 ms 00:27:07.134 [2024-10-09 08:07:08.904675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:07.134 [2024-10-09 08:07:08.905739] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 2413.000 ms, result 0 00:27:07.134 { 00:27:07.134 "name": "ftl", 00:27:07.134 "uuid": "5c9ffe53-3028-496f-aeb2-381db4ce6ed6" 00:27:07.134 } 00:27:07.134 08:07:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:27:07.393 [2024-10-09 08:07:09.225097] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:07.393 08:07:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:27:07.651 08:07:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:27:07.910 [2024-10-09 08:07:09.837860] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:27:07.910 08:07:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:27:08.168 [2024-10-09 08:07:10.103546] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:27:08.168 08:07:10 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:27:08.738 08:07:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:27:08.738 08:07:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:27:08.738 08:07:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:27:08.738 08:07:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:27:08.738 08:07:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:27:08.738 08:07:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:27:08.738 08:07:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:27:08.738 08:07:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:27:08.738 08:07:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:27:08.738 08:07:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:27:08.738 Fill FTL, iteration 1 00:27:08.738 08:07:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:27:08.738 08:07:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:27:08.738 08:07:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:08.738 08:07:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:08.738 08:07:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:08.738 08:07:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:27:08.738 08:07:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:27:08.738 08:07:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@163 -- # spdk_ini_pid=81391 00:27:08.738 08:07:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@164 -- # export spdk_ini_pid 00:27:08.738 08:07:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@165 -- # waitforlisten 81391 /var/tmp/spdk.tgt.sock 00:27:08.738 08:07:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # '[' -z 81391 ']' 00:27:08.738 08:07:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:27:08.738 08:07:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:08.738 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:27:08.738 08:07:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:27:08.738 08:07:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:08.738 08:07:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:08.738 [2024-10-09 08:07:10.644219] Starting SPDK v25.01-pre git sha1 1c2942c86 / DPDK 24.03.0 initialization... 
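The xtrace above (ftl/common.sh@121-126) is the NVMe-oF export half of the test: the freshly created ftl bdev is published over TCP on loopback, then the whole target configuration is snapshotted. A minimal sketch of that sequence, assuming a running spdk_tgt on the default RPC socket (the snapshot presumably feeds the config/tgt.json the target is later restarted with):

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  "$RPC" nvmf_create_transport --trtype TCP                       # common.sh@121
  "$RPC" nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 # @122: allow any host, 1 namespace max
  "$RPC" nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl     # @123: expose the ftl bdev as a namespace
  "$RPC" nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 \
         -t TCP -f ipv4 -s 4420 -a 127.0.0.1                      # @124: listen on 127.0.0.1:4420
  "$RPC" save_config                                              # @126: snapshot the target config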
00:27:08.738 [2024-10-09 08:07:10.644392] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81391 ] 00:27:09.030 [2024-10-09 08:07:10.810088] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:09.030 [2024-10-09 08:07:11.027391] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:27:09.966 08:07:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:09.966 08:07:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # return 0 00:27:09.966 08:07:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:27:10.224 ftln1 00:27:10.224 08:07:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:27:10.224 08:07:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:27:10.482 08:07:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@173 -- # echo ']}' 00:27:10.482 08:07:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@176 -- # killprocess 81391 00:27:10.482 08:07:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@950 -- # '[' -z 81391 ']' 00:27:10.482 08:07:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # kill -0 81391 00:27:10.482 08:07:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # uname 00:27:10.482 08:07:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:10.482 08:07:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 81391 00:27:10.482 08:07:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:27:10.482 08:07:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:27:10.482 killing process with pid 81391 00:27:10.482 08:07:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@968 -- # echo 'killing process with pid 81391' 00:27:10.482 08:07:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@969 -- # kill 81391 00:27:10.482 08:07:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@974 -- # wait 81391 00:27:13.015 08:07:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:27:13.015 08:07:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:27:13.015 [2024-10-09 08:07:14.691889] Starting SPDK v25.01-pre git sha1 1c2942c86 / DPDK 24.03.0 initialization... 
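The second spdk_tgt (pid 81391, pinned to core 1 with its own RPC socket) plays the initiator role: it attaches to the subsystem exported above, which surfaces the namespace as bdev ftln1, then dumps its bdev subsystem config so spdk_dd can reuse it. A sketch of those two steps from common.sh@167-173; the ini.json path is shortened here from the full test/ftl/config/ini.json seen in the log, and the redirect into it is not shown in the xtrace:

  rpc_ini() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock "$@"; }
  # common.sh@167: attach over TCP; controller "ftl" + namespace 1 => bdev "ftln1"
  rpc_ini bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 \
          -f ipv4 -n nqn.2018-09.io.spdk:cnode0
  # @171-173: wrap the initiator bdev config so spdk_dd can load it via --json
  { echo '{"subsystems": ['
    rpc_ini save_subsystem_config -n bdev
    echo ']}'
  } > ini.json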
00:27:13.015 [2024-10-09 08:07:14.692051] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81444 ] 00:27:13.015 [2024-10-09 08:07:14.867103] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:13.280 [2024-10-09 08:07:15.095307] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:27:14.675  [2024-10-09T08:07:17.623Z] Copying: 209/1024 [MB] (209 MBps) [2024-10-09T08:07:18.560Z] Copying: 419/1024 [MB] (210 MBps) [2024-10-09T08:07:19.960Z] Copying: 630/1024 [MB] (211 MBps) [2024-10-09T08:07:20.525Z] Copying: 841/1024 [MB] (211 MBps) [2024-10-09T08:07:21.899Z] Copying: 1024/1024 [MB] (average 210 MBps) 00:27:19.887 00:27:19.887 08:07:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:27:19.887 Calculate MD5 checksum, iteration 1 00:27:19.887 08:07:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:27:19.887 08:07:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:27:19.887 08:07:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:19.887 08:07:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:19.887 08:07:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:19.887 08:07:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:27:19.887 08:07:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:27:19.887 [2024-10-09 08:07:21.602735] Starting SPDK v25.01-pre git sha1 1c2942c86 / DPDK 24.03.0 initialization... 
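Each iteration pushes 1024 x 1 MiB blocks of urandom through ftln1, then reads the same region back into a scratch file to fingerprint it, which is exactly what the two spdk_dd invocations above and the md5sum below do. Condensed, reusing the initiator socket and (shortened) JSON config path from the previous sketch:

  DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
  ARGS=('--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=ini.json)
  # Fill FTL, iteration 1: 1 GiB at queue depth 2, starting at block 0
  "$DD" "${ARGS[@]}" --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0
  # Read the same 1 GiB back and record its MD5 (upgrade_shutdown.sh@44-48)
  "$DD" "${ARGS[@]}" --ib=ftln1 --of=file --bs=1048576 --count=1024 --qd=2 --skip=0
  sums[0]=$(md5sum file | cut -f1 -d' ')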
00:27:19.887 [2024-10-09 08:07:21.602903] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81514 ] 00:27:19.887 [2024-10-09 08:07:21.778438] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:20.146 [2024-10-09 08:07:22.005865] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:27:21.520  [2024-10-09T08:07:24.494Z] Copying: 525/1024 [MB] (525 MBps) [2024-10-09T08:07:24.494Z] Copying: 995/1024 [MB] (470 MBps) [2024-10-09T08:07:25.869Z] Copying: 1024/1024 [MB] (average 495 MBps) 00:27:23.857 00:27:23.857 08:07:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:27:23.857 08:07:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:27:25.756 08:07:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:27:25.756 Fill FTL, iteration 2 00:27:25.756 08:07:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=d4eceb69671382d5b78cf74e27910d12 00:27:25.756 08:07:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:27:25.756 08:07:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:27:25.756 08:07:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:27:25.756 08:07:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:27:25.756 08:07:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:25.756 08:07:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:25.756 08:07:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:25.756 08:07:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:27:25.756 08:07:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:27:26.015 [2024-10-09 08:07:27.796896] Starting SPDK v25.01-pre git sha1 1c2942c86 / DPDK 24.03.0 initialization... 
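The bookkeeping above (seek=1024 at @41, skip=1024 at @45) is how the script walks the device: both offsets advance by count blocks per iteration, so iteration 2 operates on the second GiB (block 1024 x 1 MiB each). The whole fill/fingerprint loop of upgrade_shutdown.sh@38-48, sketched with the variables set at @28-34 and DD/ARGS reused from the previous sketch:

  bs=1048576 count=1024 qd=2 iterations=2 seek=0 skip=0
  for ((i = 0; i < iterations; i++)); do
      "$DD" "${ARGS[@]}" --if=/dev/urandom --ob=ftln1 --bs=$bs --count=$count --qd=$qd --seek=$seek
      seek=$((seek + count))                      # next fill starts where this one ended
      "$DD" "${ARGS[@]}" --ib=ftln1 --of=file --bs=$bs --count=$count --qd=$qd --skip=$skip
      skip=$((skip + count))
      sums[i]=$(md5sum file | cut -f1 -d' ')      # one MD5 per 1 GiB slice
  done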
00:27:26.015 [2024-10-09 08:07:27.797234] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81581 ] 00:27:26.015 [2024-10-09 08:07:27.959857] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:26.273 [2024-10-09 08:07:28.185727] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:27:27.671  [2024-10-09T08:07:31.056Z] Copying: 212/1024 [MB] (212 MBps) [2024-10-09T08:07:31.990Z] Copying: 422/1024 [MB] (210 MBps) [2024-10-09T08:07:32.926Z] Copying: 634/1024 [MB] (212 MBps) [2024-10-09T08:07:33.550Z] Copying: 846/1024 [MB] (212 MBps) [2024-10-09T08:07:34.924Z] Copying: 1024/1024 [MB] (average 210 MBps) 00:27:32.912 00:27:32.912 Calculate MD5 checksum, iteration 2 00:27:32.912 08:07:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:27:32.912 08:07:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:27:32.912 08:07:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:27:32.913 08:07:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:32.913 08:07:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:32.913 08:07:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:32.913 08:07:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:27:32.913 08:07:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:27:32.913 [2024-10-09 08:07:34.769255] Starting SPDK v25.01-pre git sha1 1c2942c86 / DPDK 24.03.0 initialization... 
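Both slices are now written and iteration 2's fingerprint is taken next; the test then turns to the FTL property RPCs shown in the log that follows: bdev_ftl_set_property flips verbose_mode and prep_upgrade_on_shutdown, and bdev_ftl_get_properties feeds a jq filter that counts cache chunks actually holding data. A condensed sketch (the RPC names and the jq filter are taken verbatim from the xtrace; the failure action at upgrade_shutdown.sh@64 is paraphrased):

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  "$RPC" bdev_ftl_set_property -b ftl -p verbose_mode -v true             # @52/@70
  "$RPC" bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true # @56
  # @59-63: count cache chunks with non-zero utilization; the log below reports used=3
  used=$("$RPC" bdev_ftl_get_properties -b ftl |
         jq '[.properties[] | select(.name == "cache_device")
              | .chunks[] | select(.utilization != 0.0)] | length')
  [[ $used -eq 0 ]] && exit 1   # @64: a freshly filled cache must not be empty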
00:27:32.913 [2024-10-09 08:07:34.769438] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81647 ] 00:27:33.171 [2024-10-09 08:07:34.941570] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:33.171 [2024-10-09 08:07:35.131489] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:27:35.070  [2024-10-09T08:07:38.042Z] Copying: 486/1024 [MB] (486 MBps) [2024-10-09T08:07:38.042Z] Copying: 934/1024 [MB] (448 MBps) [2024-10-09T08:07:39.418Z] Copying: 1024/1024 [MB] (average 462 MBps) 00:27:37.406 00:27:37.406 08:07:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:27:37.406 08:07:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:27:39.933 08:07:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:27:39.933 08:07:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=7a9dda5ea0a710b4ef357bb84b77d879 00:27:39.933 08:07:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:27:39.933 08:07:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:27:39.933 08:07:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:27:39.933 [2024-10-09 08:07:41.809186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:39.933 [2024-10-09 08:07:41.809257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:27:39.933 [2024-10-09 08:07:41.809279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:27:39.933 [2024-10-09 08:07:41.809299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:39.933 [2024-10-09 08:07:41.809356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:39.933 [2024-10-09 08:07:41.809376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:27:39.933 [2024-10-09 08:07:41.809390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:27:39.933 [2024-10-09 08:07:41.809403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:39.933 [2024-10-09 08:07:41.809434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:39.933 [2024-10-09 08:07:41.809450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:27:39.933 [2024-10-09 08:07:41.809462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:27:39.933 [2024-10-09 08:07:41.809474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:39.933 [2024-10-09 08:07:41.809559] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.370 ms, result 0 00:27:39.933 true 00:27:39.933 08:07:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:40.192 { 00:27:40.192 "name": "ftl", 00:27:40.192 "properties": [ 00:27:40.192 { 00:27:40.192 "name": "superblock_version", 00:27:40.192 "value": 5, 00:27:40.192 "read-only": true 00:27:40.192 }, 00:27:40.192 { 00:27:40.192 "name": "base_device", 00:27:40.192 "bands": [ 00:27:40.192 { 00:27:40.192 "id": 
0, 00:27:40.192 "state": "FREE", 00:27:40.192 "validity": 0.0 00:27:40.192 }, 00:27:40.192 { 00:27:40.192 "id": 1, 00:27:40.192 "state": "FREE", 00:27:40.192 "validity": 0.0 00:27:40.192 }, 00:27:40.192 { 00:27:40.192 "id": 2, 00:27:40.192 "state": "FREE", 00:27:40.192 "validity": 0.0 00:27:40.192 }, 00:27:40.192 { 00:27:40.192 "id": 3, 00:27:40.192 "state": "FREE", 00:27:40.192 "validity": 0.0 00:27:40.192 }, 00:27:40.192 { 00:27:40.192 "id": 4, 00:27:40.192 "state": "FREE", 00:27:40.192 "validity": 0.0 00:27:40.192 }, 00:27:40.192 { 00:27:40.192 "id": 5, 00:27:40.192 "state": "FREE", 00:27:40.192 "validity": 0.0 00:27:40.192 }, 00:27:40.192 { 00:27:40.192 "id": 6, 00:27:40.192 "state": "FREE", 00:27:40.192 "validity": 0.0 00:27:40.192 }, 00:27:40.192 { 00:27:40.192 "id": 7, 00:27:40.192 "state": "FREE", 00:27:40.192 "validity": 0.0 00:27:40.192 }, 00:27:40.192 { 00:27:40.192 "id": 8, 00:27:40.192 "state": "FREE", 00:27:40.192 "validity": 0.0 00:27:40.192 }, 00:27:40.192 { 00:27:40.192 "id": 9, 00:27:40.192 "state": "FREE", 00:27:40.192 "validity": 0.0 00:27:40.192 }, 00:27:40.192 { 00:27:40.192 "id": 10, 00:27:40.192 "state": "FREE", 00:27:40.192 "validity": 0.0 00:27:40.192 }, 00:27:40.192 { 00:27:40.192 "id": 11, 00:27:40.192 "state": "FREE", 00:27:40.192 "validity": 0.0 00:27:40.192 }, 00:27:40.192 { 00:27:40.192 "id": 12, 00:27:40.192 "state": "FREE", 00:27:40.192 "validity": 0.0 00:27:40.192 }, 00:27:40.192 { 00:27:40.192 "id": 13, 00:27:40.192 "state": "FREE", 00:27:40.192 "validity": 0.0 00:27:40.192 }, 00:27:40.192 { 00:27:40.192 "id": 14, 00:27:40.192 "state": "FREE", 00:27:40.192 "validity": 0.0 00:27:40.192 }, 00:27:40.192 { 00:27:40.192 "id": 15, 00:27:40.192 "state": "FREE", 00:27:40.192 "validity": 0.0 00:27:40.192 }, 00:27:40.192 { 00:27:40.192 "id": 16, 00:27:40.192 "state": "FREE", 00:27:40.192 "validity": 0.0 00:27:40.192 }, 00:27:40.192 { 00:27:40.192 "id": 17, 00:27:40.192 "state": "FREE", 00:27:40.192 "validity": 0.0 00:27:40.192 } 00:27:40.192 ], 00:27:40.192 "read-only": true 00:27:40.192 }, 00:27:40.192 { 00:27:40.192 "name": "cache_device", 00:27:40.192 "type": "bdev", 00:27:40.192 "chunks": [ 00:27:40.192 { 00:27:40.192 "id": 0, 00:27:40.192 "state": "INACTIVE", 00:27:40.192 "utilization": 0.0 00:27:40.192 }, 00:27:40.192 { 00:27:40.192 "id": 1, 00:27:40.192 "state": "CLOSED", 00:27:40.192 "utilization": 1.0 00:27:40.192 }, 00:27:40.192 { 00:27:40.192 "id": 2, 00:27:40.192 "state": "CLOSED", 00:27:40.192 "utilization": 1.0 00:27:40.192 }, 00:27:40.192 { 00:27:40.192 "id": 3, 00:27:40.192 "state": "OPEN", 00:27:40.192 "utilization": 0.001953125 00:27:40.192 }, 00:27:40.192 { 00:27:40.192 "id": 4, 00:27:40.192 "state": "OPEN", 00:27:40.192 "utilization": 0.0 00:27:40.192 } 00:27:40.192 ], 00:27:40.192 "read-only": true 00:27:40.192 }, 00:27:40.192 { 00:27:40.192 "name": "verbose_mode", 00:27:40.192 "value": true, 00:27:40.192 "unit": "", 00:27:40.192 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:27:40.192 }, 00:27:40.192 { 00:27:40.192 "name": "prep_upgrade_on_shutdown", 00:27:40.192 "value": false, 00:27:40.192 "unit": "", 00:27:40.192 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:27:40.192 } 00:27:40.192 ] 00:27:40.192 } 00:27:40.192 08:07:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:27:40.451 [2024-10-09 08:07:42.401897] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:40.451 [2024-10-09 08:07:42.401958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:27:40.451 [2024-10-09 08:07:42.401980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:27:40.451 [2024-10-09 08:07:42.401993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:40.451 [2024-10-09 08:07:42.402057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:40.451 [2024-10-09 08:07:42.402076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:27:40.451 [2024-10-09 08:07:42.402090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:27:40.451 [2024-10-09 08:07:42.402102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:40.451 [2024-10-09 08:07:42.402131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:40.451 [2024-10-09 08:07:42.402146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:27:40.451 [2024-10-09 08:07:42.402159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:27:40.451 [2024-10-09 08:07:42.402170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:40.451 [2024-10-09 08:07:42.402248] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.337 ms, result 0 00:27:40.451 true 00:27:40.451 08:07:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:27:40.451 08:07:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:27:40.451 08:07:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:40.709 08:07:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:27:40.709 08:07:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:27:40.709 08:07:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:27:40.968 [2024-10-09 08:07:42.910592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:40.968 [2024-10-09 08:07:42.910645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:27:40.968 [2024-10-09 08:07:42.910666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:27:40.968 [2024-10-09 08:07:42.910680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:40.968 [2024-10-09 08:07:42.910716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:40.968 [2024-10-09 08:07:42.910732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:27:40.968 [2024-10-09 08:07:42.910745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:27:40.968 [2024-10-09 08:07:42.910757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:40.968 [2024-10-09 08:07:42.910786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:40.968 [2024-10-09 08:07:42.910802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:27:40.968 [2024-10-09 08:07:42.910814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:27:40.968 [2024-10-09 
08:07:42.910826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:40.968 [2024-10-09 08:07:42.910902] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.297 ms, result 0 00:27:40.968 true 00:27:40.968 08:07:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:41.226 { 00:27:41.226 "name": "ftl", 00:27:41.226 "properties": [ 00:27:41.226 { 00:27:41.226 "name": "superblock_version", 00:27:41.226 "value": 5, 00:27:41.226 "read-only": true 00:27:41.226 }, 00:27:41.226 { 00:27:41.226 "name": "base_device", 00:27:41.226 "bands": [ 00:27:41.226 { 00:27:41.226 "id": 0, 00:27:41.226 "state": "FREE", 00:27:41.226 "validity": 0.0 00:27:41.226 }, 00:27:41.226 { 00:27:41.226 "id": 1, 00:27:41.226 "state": "FREE", 00:27:41.226 "validity": 0.0 00:27:41.226 }, 00:27:41.226 { 00:27:41.226 "id": 2, 00:27:41.226 "state": "FREE", 00:27:41.226 "validity": 0.0 00:27:41.226 }, 00:27:41.226 { 00:27:41.226 "id": 3, 00:27:41.226 "state": "FREE", 00:27:41.226 "validity": 0.0 00:27:41.226 }, 00:27:41.226 { 00:27:41.226 "id": 4, 00:27:41.226 "state": "FREE", 00:27:41.226 "validity": 0.0 00:27:41.226 }, 00:27:41.226 { 00:27:41.226 "id": 5, 00:27:41.226 "state": "FREE", 00:27:41.226 "validity": 0.0 00:27:41.226 }, 00:27:41.226 { 00:27:41.226 "id": 6, 00:27:41.226 "state": "FREE", 00:27:41.226 "validity": 0.0 00:27:41.226 }, 00:27:41.226 { 00:27:41.226 "id": 7, 00:27:41.226 "state": "FREE", 00:27:41.226 "validity": 0.0 00:27:41.226 }, 00:27:41.226 { 00:27:41.226 "id": 8, 00:27:41.226 "state": "FREE", 00:27:41.226 "validity": 0.0 00:27:41.226 }, 00:27:41.226 { 00:27:41.226 "id": 9, 00:27:41.226 "state": "FREE", 00:27:41.226 "validity": 0.0 00:27:41.226 }, 00:27:41.226 { 00:27:41.226 "id": 10, 00:27:41.226 "state": "FREE", 00:27:41.226 "validity": 0.0 00:27:41.226 }, 00:27:41.226 { 00:27:41.226 "id": 11, 00:27:41.226 "state": "FREE", 00:27:41.226 "validity": 0.0 00:27:41.226 }, 00:27:41.226 { 00:27:41.226 "id": 12, 00:27:41.226 "state": "FREE", 00:27:41.226 "validity": 0.0 00:27:41.226 }, 00:27:41.226 { 00:27:41.226 "id": 13, 00:27:41.226 "state": "FREE", 00:27:41.226 "validity": 0.0 00:27:41.226 }, 00:27:41.226 { 00:27:41.226 "id": 14, 00:27:41.226 "state": "FREE", 00:27:41.226 "validity": 0.0 00:27:41.226 }, 00:27:41.226 { 00:27:41.226 "id": 15, 00:27:41.226 "state": "FREE", 00:27:41.226 "validity": 0.0 00:27:41.226 }, 00:27:41.226 { 00:27:41.226 "id": 16, 00:27:41.226 "state": "FREE", 00:27:41.226 "validity": 0.0 00:27:41.226 }, 00:27:41.226 { 00:27:41.227 "id": 17, 00:27:41.227 "state": "FREE", 00:27:41.227 "validity": 0.0 00:27:41.227 } 00:27:41.227 ], 00:27:41.227 "read-only": true 00:27:41.227 }, 00:27:41.227 { 00:27:41.227 "name": "cache_device", 00:27:41.227 "type": "bdev", 00:27:41.227 "chunks": [ 00:27:41.227 { 00:27:41.227 "id": 0, 00:27:41.227 "state": "INACTIVE", 00:27:41.227 "utilization": 0.0 00:27:41.227 }, 00:27:41.227 { 00:27:41.227 "id": 1, 00:27:41.227 "state": "CLOSED", 00:27:41.227 "utilization": 1.0 00:27:41.227 }, 00:27:41.227 { 00:27:41.227 "id": 2, 00:27:41.227 "state": "CLOSED", 00:27:41.227 "utilization": 1.0 00:27:41.227 }, 00:27:41.227 { 00:27:41.227 "id": 3, 00:27:41.227 "state": "OPEN", 00:27:41.227 "utilization": 0.001953125 00:27:41.227 }, 00:27:41.227 { 00:27:41.227 "id": 4, 00:27:41.227 "state": "OPEN", 00:27:41.227 "utilization": 0.0 00:27:41.227 } 00:27:41.227 ], 00:27:41.227 "read-only": true 00:27:41.227 
}, 00:27:41.227 { 00:27:41.227 "name": "verbose_mode", 00:27:41.227 "value": true, 00:27:41.227 "unit": "", 00:27:41.227 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:27:41.227 }, 00:27:41.227 { 00:27:41.227 "name": "prep_upgrade_on_shutdown", 00:27:41.227 "value": true, 00:27:41.227 "unit": "", 00:27:41.227 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:27:41.227 } 00:27:41.227 ] 00:27:41.227 } 00:27:41.227 08:07:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:27:41.227 08:07:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 81268 ]] 00:27:41.227 08:07:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 81268 00:27:41.227 08:07:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@950 -- # '[' -z 81268 ']' 00:27:41.227 08:07:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # kill -0 81268 00:27:41.485 08:07:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # uname 00:27:41.485 08:07:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:41.485 08:07:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 81268 00:27:41.485 08:07:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:41.485 08:07:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:41.485 08:07:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@968 -- # echo 'killing process with pid 81268' 00:27:41.485 killing process with pid 81268 00:27:41.485 08:07:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@969 -- # kill 81268 00:27:41.485 08:07:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@974 -- # wait 81268 00:27:42.419 [2024-10-09 08:07:44.228533] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:27:42.419 [2024-10-09 08:07:44.246812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:42.419 [2024-10-09 08:07:44.246865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:27:42.419 [2024-10-09 08:07:44.246887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:27:42.419 [2024-10-09 08:07:44.246900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:42.419 [2024-10-09 08:07:44.246949] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:27:42.419 [2024-10-09 08:07:44.250344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:42.419 [2024-10-09 08:07:44.250397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:27:42.419 [2024-10-09 08:07:44.250414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.370 ms 00:27:42.419 [2024-10-09 08:07:44.250427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:52.390 [2024-10-09 08:07:52.902885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:52.390 [2024-10-09 08:07:52.902970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:27:52.390 [2024-10-09 08:07:52.903009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8652.479 ms 00:27:52.390 [2024-10-09 08:07:52.903023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:52.390 [2024-10-09 
08:07:52.904309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:52.390 [2024-10-09 08:07:52.904361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:27:52.390 [2024-10-09 08:07:52.904379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.257 ms 00:27:52.390 [2024-10-09 08:07:52.904391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:52.390 [2024-10-09 08:07:52.905693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:52.390 [2024-10-09 08:07:52.905748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:27:52.390 [2024-10-09 08:07:52.905792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.258 ms 00:27:52.390 [2024-10-09 08:07:52.905805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:52.390 [2024-10-09 08:07:52.918709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:52.390 [2024-10-09 08:07:52.918754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:27:52.390 [2024-10-09 08:07:52.918772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.859 ms 00:27:52.390 [2024-10-09 08:07:52.918785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:52.390 [2024-10-09 08:07:52.926601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:52.390 [2024-10-09 08:07:52.926649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:27:52.390 [2024-10-09 08:07:52.926667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.764 ms 00:27:52.390 [2024-10-09 08:07:52.926681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:52.390 [2024-10-09 08:07:52.926803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:52.390 [2024-10-09 08:07:52.926825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:27:52.390 [2024-10-09 08:07:52.926839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.062 ms 00:27:52.390 [2024-10-09 08:07:52.926851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:52.390 [2024-10-09 08:07:52.939733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:52.390 [2024-10-09 08:07:52.939778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:27:52.390 [2024-10-09 08:07:52.939796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.857 ms 00:27:52.390 [2024-10-09 08:07:52.939809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:52.390 [2024-10-09 08:07:52.952497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:52.390 [2024-10-09 08:07:52.952564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:27:52.390 [2024-10-09 08:07:52.952595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.643 ms 00:27:52.390 [2024-10-09 08:07:52.952607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:52.390 [2024-10-09 08:07:52.965091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:52.391 [2024-10-09 08:07:52.965130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:27:52.391 [2024-10-09 08:07:52.965161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.443 ms 00:27:52.391 [2024-10-09 08:07:52.965188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 
status: 0 00:27:52.391 [2024-10-09 08:07:52.978143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:52.391 [2024-10-09 08:07:52.978203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:27:52.391 [2024-10-09 08:07:52.978238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.857 ms 00:27:52.391 [2024-10-09 08:07:52.978250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:52.391 [2024-10-09 08:07:52.978295] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:27:52.391 [2024-10-09 08:07:52.978319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:27:52.391 [2024-10-09 08:07:52.978347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:27:52.391 [2024-10-09 08:07:52.978363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:27:52.391 [2024-10-09 08:07:52.978376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:52.391 [2024-10-09 08:07:52.978388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:52.391 [2024-10-09 08:07:52.978400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:52.391 [2024-10-09 08:07:52.978428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:52.391 [2024-10-09 08:07:52.978440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:52.391 [2024-10-09 08:07:52.978461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:52.391 [2024-10-09 08:07:52.978473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:52.391 [2024-10-09 08:07:52.978485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:52.391 [2024-10-09 08:07:52.978496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:52.391 [2024-10-09 08:07:52.978509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:52.391 [2024-10-09 08:07:52.978536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:52.391 [2024-10-09 08:07:52.978547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:52.391 [2024-10-09 08:07:52.978559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:52.391 [2024-10-09 08:07:52.978571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:52.391 [2024-10-09 08:07:52.978582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:52.391 [2024-10-09 08:07:52.978596] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:27:52.391 [2024-10-09 08:07:52.978607] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 5c9ffe53-3028-496f-aeb2-381db4ce6ed6 00:27:52.391 [2024-10-09 08:07:52.978619] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:27:52.391 [2024-10-09 
08:07:52.978641] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 786752 00:27:52.391 [2024-10-09 08:07:52.978658] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:27:52.391 [2024-10-09 08:07:52.978671] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:27:52.391 [2024-10-09 08:07:52.978682] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:27:52.391 [2024-10-09 08:07:52.978697] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:27:52.391 [2024-10-09 08:07:52.978708] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:27:52.391 [2024-10-09 08:07:52.978718] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:27:52.391 [2024-10-09 08:07:52.978729] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:27:52.391 [2024-10-09 08:07:52.978741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:52.391 [2024-10-09 08:07:52.978753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:27:52.391 [2024-10-09 08:07:52.978765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.448 ms 00:27:52.391 [2024-10-09 08:07:52.978776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:52.391 [2024-10-09 08:07:52.995666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:52.391 [2024-10-09 08:07:52.995734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:27:52.391 [2024-10-09 08:07:52.995752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 16.843 ms 00:27:52.391 [2024-10-09 08:07:52.995765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:52.391 [2024-10-09 08:07:52.996216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:52.391 [2024-10-09 08:07:52.996242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:27:52.391 [2024-10-09 08:07:52.996257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.422 ms 00:27:52.391 [2024-10-09 08:07:52.996269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:52.391 [2024-10-09 08:07:53.045888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:52.391 [2024-10-09 08:07:53.045991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:27:52.391 [2024-10-09 08:07:53.046028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:52.391 [2024-10-09 08:07:53.046041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:52.391 [2024-10-09 08:07:53.046101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:52.391 [2024-10-09 08:07:53.046117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:27:52.391 [2024-10-09 08:07:53.046130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:52.391 [2024-10-09 08:07:53.046146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:52.391 [2024-10-09 08:07:53.046284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:52.391 [2024-10-09 08:07:53.046311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:27:52.391 [2024-10-09 08:07:53.046324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:52.391 [2024-10-09 08:07:53.046336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 
status: 0 00:27:52.391 [2024-10-09 08:07:53.046379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:52.391 [2024-10-09 08:07:53.046395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:27:52.391 [2024-10-09 08:07:53.046408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:52.391 [2024-10-09 08:07:53.046420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:52.391 [2024-10-09 08:07:53.151023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:52.391 [2024-10-09 08:07:53.151095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:27:52.391 [2024-10-09 08:07:53.151130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:52.391 [2024-10-09 08:07:53.151142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:52.391 [2024-10-09 08:07:53.235070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:52.391 [2024-10-09 08:07:53.235151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:27:52.391 [2024-10-09 08:07:53.235186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:52.391 [2024-10-09 08:07:53.235198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:52.391 [2024-10-09 08:07:53.235319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:52.391 [2024-10-09 08:07:53.235338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:27:52.391 [2024-10-09 08:07:53.235376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:52.391 [2024-10-09 08:07:53.235388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:52.391 [2024-10-09 08:07:53.235484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:52.391 [2024-10-09 08:07:53.235502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:27:52.391 [2024-10-09 08:07:53.235516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:52.391 [2024-10-09 08:07:53.235527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:52.391 [2024-10-09 08:07:53.235670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:52.391 [2024-10-09 08:07:53.235723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:27:52.391 [2024-10-09 08:07:53.235738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:52.391 [2024-10-09 08:07:53.235758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:52.391 [2024-10-09 08:07:53.235812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:52.391 [2024-10-09 08:07:53.235830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:27:52.391 [2024-10-09 08:07:53.235843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:52.391 [2024-10-09 08:07:53.235854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:52.391 [2024-10-09 08:07:53.235909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:52.391 [2024-10-09 08:07:53.235927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:27:52.391 [2024-10-09 08:07:53.235940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:52.391 [2024-10-09 08:07:53.235959] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:52.391 [2024-10-09 08:07:53.236013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:52.391 [2024-10-09 08:07:53.236049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:27:52.391 [2024-10-09 08:07:53.236064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:52.391 [2024-10-09 08:07:53.236076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:52.391 [2024-10-09 08:07:53.236234] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 8989.449 ms, result 0 00:27:55.673 08:07:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:27:55.673 08:07:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:27:55.673 08:07:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:27:55.673 08:07:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:27:55.673 08:07:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:27:55.673 08:07:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=81872 00:27:55.673 08:07:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:55.673 08:07:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:27:55.673 08:07:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 81872 00:27:55.673 08:07:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # '[' -z 81872 ']' 00:27:55.673 08:07:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:55.673 08:07:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:55.673 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:55.673 08:07:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:55.673 08:07:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:55.673 08:07:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:55.673 [2024-10-09 08:07:57.623277] Starting SPDK v25.01-pre git sha1 1c2942c86 / DPDK 24.03.0 initialization... 
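The 'FTL shutdown' management process above closes out at 8989.449 ms, after which the original target (pid 81268) is gone and common.sh@81-91 relaunches spdk_tgt from the configuration snapshotted earlier; with prep_upgrade_on_shutdown left enabled, the new instance is expected to bring the device up through the upgrade path. The restart, sketched (waitforlisten is the autotest_common.sh helper visible in the log):

  TGT=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
  CFG=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json
  "$TGT" '--cpumask=[0]' --config="$CFG" &   # common.sh@85: relaunch on core 0
  spdk_tgt_pid=$!                            # @89
  waitforlisten "$spdk_tgt_pid"              # @91: block until the RPC socket answers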
00:27:55.673 [2024-10-09 08:07:57.623435] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81872 ] 00:27:55.931 [2024-10-09 08:07:57.786567] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:56.188 [2024-10-09 08:07:57.977192] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:27:57.157 [2024-10-09 08:07:58.835250] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:27:57.157 [2024-10-09 08:07:58.835324] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:27:57.157 [2024-10-09 08:07:58.983476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:57.157 [2024-10-09 08:07:58.983531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:27:57.157 [2024-10-09 08:07:58.983556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:27:57.157 [2024-10-09 08:07:58.983568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:57.157 [2024-10-09 08:07:58.983641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:57.157 [2024-10-09 08:07:58.983661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:27:57.157 [2024-10-09 08:07:58.983675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.040 ms 00:27:57.157 [2024-10-09 08:07:58.983698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:57.157 [2024-10-09 08:07:58.983748] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:27:57.157 [2024-10-09 08:07:58.984689] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:27:57.157 [2024-10-09 08:07:58.984725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:57.157 [2024-10-09 08:07:58.984739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:27:57.157 [2024-10-09 08:07:58.984752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.997 ms 00:27:57.157 [2024-10-09 08:07:58.984769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:57.157 [2024-10-09 08:07:58.985979] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:27:57.157 [2024-10-09 08:07:59.003052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:57.157 [2024-10-09 08:07:59.003097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:27:57.157 [2024-10-09 08:07:59.003116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.075 ms 00:27:57.157 [2024-10-09 08:07:59.003128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:57.157 [2024-10-09 08:07:59.003213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:57.157 [2024-10-09 08:07:59.003235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:27:57.157 [2024-10-09 08:07:59.003249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.029 ms 00:27:57.157 [2024-10-09 08:07:59.003261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:57.157 [2024-10-09 08:07:59.007916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:57.157 [2024-10-09 
08:07:59.007970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:27:57.157 [2024-10-09 08:07:59.007987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.500 ms 00:27:57.157 [2024-10-09 08:07:59.007999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:57.157 [2024-10-09 08:07:59.008095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:57.157 [2024-10-09 08:07:59.008117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:27:57.157 [2024-10-09 08:07:59.008130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.058 ms 00:27:57.157 [2024-10-09 08:07:59.008147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:57.157 [2024-10-09 08:07:59.008223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:57.157 [2024-10-09 08:07:59.008242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:27:57.157 [2024-10-09 08:07:59.008255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:27:57.157 [2024-10-09 08:07:59.008266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:57.157 [2024-10-09 08:07:59.008306] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:27:57.157 [2024-10-09 08:07:59.012939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:57.157 [2024-10-09 08:07:59.012978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:27:57.157 [2024-10-09 08:07:59.012995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.643 ms 00:27:57.157 [2024-10-09 08:07:59.013007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:57.157 [2024-10-09 08:07:59.013046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:57.157 [2024-10-09 08:07:59.013064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:27:57.157 [2024-10-09 08:07:59.013083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:27:57.157 [2024-10-09 08:07:59.013094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:57.157 [2024-10-09 08:07:59.013150] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:27:57.157 [2024-10-09 08:07:59.013198] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:27:57.157 [2024-10-09 08:07:59.013255] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:27:57.157 [2024-10-09 08:07:59.013277] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:27:57.157 [2024-10-09 08:07:59.013411] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:27:57.157 [2024-10-09 08:07:59.013437] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:27:57.157 [2024-10-09 08:07:59.013452] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:27:57.157 [2024-10-09 08:07:59.013468] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:27:57.157 [2024-10-09 08:07:59.013482] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device 
capacity: 5120.00 MiB 00:27:57.157 [2024-10-09 08:07:59.013495] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:27:57.157 [2024-10-09 08:07:59.013507] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:27:57.157 [2024-10-09 08:07:59.013524] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:27:57.157 [2024-10-09 08:07:59.013535] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:27:57.157 [2024-10-09 08:07:59.013548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:57.157 [2024-10-09 08:07:59.013560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:27:57.157 [2024-10-09 08:07:59.013573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.403 ms 00:27:57.157 [2024-10-09 08:07:59.013589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:57.157 [2024-10-09 08:07:59.013699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:57.157 [2024-10-09 08:07:59.013718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:27:57.157 [2024-10-09 08:07:59.013731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.072 ms 00:27:57.157 [2024-10-09 08:07:59.013743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:57.157 [2024-10-09 08:07:59.013890] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:27:57.157 [2024-10-09 08:07:59.013911] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:27:57.157 [2024-10-09 08:07:59.013925] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:27:57.157 [2024-10-09 08:07:59.013937] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:57.157 [2024-10-09 08:07:59.013956] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:27:57.157 [2024-10-09 08:07:59.013967] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:27:57.157 [2024-10-09 08:07:59.013978] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:27:57.157 [2024-10-09 08:07:59.013989] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:27:57.157 [2024-10-09 08:07:59.014001] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:27:57.157 [2024-10-09 08:07:59.014012] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:57.157 [2024-10-09 08:07:59.014023] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:27:57.157 [2024-10-09 08:07:59.014034] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:27:57.157 [2024-10-09 08:07:59.014045] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:57.157 [2024-10-09 08:07:59.014056] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:27:57.157 [2024-10-09 08:07:59.014067] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:27:57.157 [2024-10-09 08:07:59.014078] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:57.157 [2024-10-09 08:07:59.014095] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:27:57.157 [2024-10-09 08:07:59.014106] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:27:57.157 [2024-10-09 08:07:59.014117] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:57.157 [2024-10-09 08:07:59.014129] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:27:57.157 [2024-10-09 08:07:59.014139] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:27:57.157 [2024-10-09 08:07:59.014152] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:57.157 [2024-10-09 08:07:59.014171] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:27:57.157 [2024-10-09 08:07:59.014192] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:27:57.158 [2024-10-09 08:07:59.014218] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:57.158 [2024-10-09 08:07:59.014230] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:27:57.158 [2024-10-09 08:07:59.014242] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:27:57.158 [2024-10-09 08:07:59.014252] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:57.158 [2024-10-09 08:07:59.014263] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:27:57.158 [2024-10-09 08:07:59.014274] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:27:57.158 [2024-10-09 08:07:59.014285] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:57.158 [2024-10-09 08:07:59.014295] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:27:57.158 [2024-10-09 08:07:59.014306] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:27:57.158 [2024-10-09 08:07:59.014317] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:57.158 [2024-10-09 08:07:59.014343] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:27:57.158 [2024-10-09 08:07:59.014358] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:27:57.158 [2024-10-09 08:07:59.014369] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:57.158 [2024-10-09 08:07:59.014380] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:27:57.158 [2024-10-09 08:07:59.014391] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:27:57.158 [2024-10-09 08:07:59.014402] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:57.158 [2024-10-09 08:07:59.014413] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:27:57.158 [2024-10-09 08:07:59.014423] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:27:57.158 [2024-10-09 08:07:59.014434] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:57.158 [2024-10-09 08:07:59.014444] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:27:57.158 [2024-10-09 08:07:59.014457] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:27:57.158 [2024-10-09 08:07:59.014470] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:27:57.158 [2024-10-09 08:07:59.014481] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:57.158 [2024-10-09 08:07:59.014493] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:27:57.158 [2024-10-09 08:07:59.014508] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:27:57.158 [2024-10-09 08:07:59.014520] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:27:57.158 [2024-10-09 08:07:59.014531] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:27:57.158 [2024-10-09 08:07:59.014542] 
ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:27:57.158 [2024-10-09 08:07:59.014553] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:27:57.158 [2024-10-09 08:07:59.014566] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:27:57.158 [2024-10-09 08:07:59.014581] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:57.158 [2024-10-09 08:07:59.014594] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:27:57.158 [2024-10-09 08:07:59.014607] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:27:57.158 [2024-10-09 08:07:59.014618] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:27:57.158 [2024-10-09 08:07:59.014630] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:27:57.158 [2024-10-09 08:07:59.014642] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:27:57.158 [2024-10-09 08:07:59.014653] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:27:57.158 [2024-10-09 08:07:59.014666] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:27:57.158 [2024-10-09 08:07:59.014677] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:27:57.158 [2024-10-09 08:07:59.014689] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:27:57.158 [2024-10-09 08:07:59.014701] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:27:57.158 [2024-10-09 08:07:59.014713] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:27:57.158 [2024-10-09 08:07:59.014725] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:27:57.158 [2024-10-09 08:07:59.014737] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:27:57.158 [2024-10-09 08:07:59.014749] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:27:57.158 [2024-10-09 08:07:59.014761] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:27:57.158 [2024-10-09 08:07:59.014774] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:57.158 [2024-10-09 08:07:59.014787] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:57.158 [2024-10-09 08:07:59.014800] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:27:57.158 [2024-10-09 08:07:59.014812] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:27:57.158 [2024-10-09 08:07:59.014823] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:27:57.158 [2024-10-09 08:07:59.014837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:57.158 [2024-10-09 08:07:59.014849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:27:57.158 [2024-10-09 08:07:59.014862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.019 ms 00:27:57.158 [2024-10-09 08:07:59.014879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:57.158 [2024-10-09 08:07:59.014943] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:27:57.158 [2024-10-09 08:07:59.014965] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:27:59.058 [2024-10-09 08:08:00.995688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:59.058 [2024-10-09 08:08:00.995777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:27:59.058 [2024-10-09 08:08:00.995800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1980.750 ms 00:27:59.058 [2024-10-09 08:08:00.995824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:59.058 [2024-10-09 08:08:01.028201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:59.058 [2024-10-09 08:08:01.028265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:27:59.058 [2024-10-09 08:08:01.028286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 32.101 ms 00:27:59.058 [2024-10-09 08:08:01.028299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:59.058 [2024-10-09 08:08:01.028464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:59.058 [2024-10-09 08:08:01.028487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:27:59.058 [2024-10-09 08:08:01.028501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:27:59.058 [2024-10-09 08:08:01.028513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:59.317 [2024-10-09 08:08:01.075059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:59.317 [2024-10-09 08:08:01.075131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:27:59.317 [2024-10-09 08:08:01.075151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 46.478 ms 00:27:59.317 [2024-10-09 08:08:01.075163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:59.317 [2024-10-09 08:08:01.075254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:59.317 [2024-10-09 08:08:01.075273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:27:59.317 [2024-10-09 08:08:01.075286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:27:59.317 [2024-10-09 08:08:01.075298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:59.317 [2024-10-09 08:08:01.075742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:59.317 [2024-10-09 08:08:01.075771] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:27:59.317 [2024-10-09 08:08:01.075786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.339 ms 00:27:59.317 [2024-10-09 08:08:01.075798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:59.317 [2024-10-09 08:08:01.075866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:59.317 [2024-10-09 08:08:01.075884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:27:59.317 [2024-10-09 08:08:01.075898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.028 ms 00:27:59.317 [2024-10-09 08:08:01.075909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:59.317 [2024-10-09 08:08:01.093664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:59.317 [2024-10-09 08:08:01.093724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:27:59.317 [2024-10-09 08:08:01.093744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.684 ms 00:27:59.317 [2024-10-09 08:08:01.093757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:59.317 [2024-10-09 08:08:01.110355] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:27:59.317 [2024-10-09 08:08:01.110407] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:27:59.317 [2024-10-09 08:08:01.110429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:59.317 [2024-10-09 08:08:01.110442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:27:59.317 [2024-10-09 08:08:01.110456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 16.464 ms 00:27:59.317 [2024-10-09 08:08:01.110468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:59.317 [2024-10-09 08:08:01.128546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:59.317 [2024-10-09 08:08:01.128592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:27:59.317 [2024-10-09 08:08:01.128611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 18.011 ms 00:27:59.317 [2024-10-09 08:08:01.128623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:59.317 [2024-10-09 08:08:01.144147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:59.317 [2024-10-09 08:08:01.144191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:27:59.317 [2024-10-09 08:08:01.144208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.448 ms 00:27:59.317 [2024-10-09 08:08:01.144220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:59.317 [2024-10-09 08:08:01.159676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:59.317 [2024-10-09 08:08:01.159727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:27:59.317 [2024-10-09 08:08:01.159744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.394 ms 00:27:59.317 [2024-10-09 08:08:01.159756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:59.317 [2024-10-09 08:08:01.160596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:59.317 [2024-10-09 08:08:01.160632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:27:59.317 [2024-10-09 
08:08:01.160648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.701 ms 00:27:59.317 [2024-10-09 08:08:01.160660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:59.317 [2024-10-09 08:08:01.233845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:59.317 [2024-10-09 08:08:01.233916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:27:59.317 [2024-10-09 08:08:01.233936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 73.153 ms 00:27:59.317 [2024-10-09 08:08:01.233948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:59.317 [2024-10-09 08:08:01.246809] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:27:59.317 [2024-10-09 08:08:01.247825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:59.317 [2024-10-09 08:08:01.247862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:27:59.317 [2024-10-09 08:08:01.247888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.793 ms 00:27:59.317 [2024-10-09 08:08:01.247901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:59.317 [2024-10-09 08:08:01.248040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:59.317 [2024-10-09 08:08:01.248062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:27:59.317 [2024-10-09 08:08:01.248077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:27:59.317 [2024-10-09 08:08:01.248089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:59.317 [2024-10-09 08:08:01.248174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:59.317 [2024-10-09 08:08:01.248195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:27:59.317 [2024-10-09 08:08:01.248209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.024 ms 00:27:59.317 [2024-10-09 08:08:01.248226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:59.317 [2024-10-09 08:08:01.248264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:59.317 [2024-10-09 08:08:01.248280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:27:59.317 [2024-10-09 08:08:01.248293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:27:59.317 [2024-10-09 08:08:01.248304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:59.317 [2024-10-09 08:08:01.248382] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:27:59.317 [2024-10-09 08:08:01.248405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:59.317 [2024-10-09 08:08:01.248417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:27:59.317 [2024-10-09 08:08:01.248429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.024 ms 00:27:59.317 [2024-10-09 08:08:01.248441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:59.317 [2024-10-09 08:08:01.280807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:59.317 [2024-10-09 08:08:01.280884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:27:59.317 [2024-10-09 08:08:01.280904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 32.322 ms 00:27:59.317 [2024-10-09 08:08:01.280917] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:59.317 [2024-10-09 08:08:01.281060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:59.317 [2024-10-09 08:08:01.281081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:27:59.317 [2024-10-09 08:08:01.281095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.056 ms 00:27:59.317 [2024-10-09 08:08:01.281113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:59.317 [2024-10-09 08:08:01.282462] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 2298.449 ms, result 0 00:27:59.317 [2024-10-09 08:08:01.297325] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:59.317 [2024-10-09 08:08:01.313349] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:27:59.317 [2024-10-09 08:08:01.322381] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:27:59.576 08:08:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:59.576 08:08:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # return 0 00:27:59.576 08:08:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:27:59.576 08:08:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:27:59.576 08:08:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:27:59.834 [2024-10-09 08:08:01.658597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:59.834 [2024-10-09 08:08:01.658655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:27:59.834 [2024-10-09 08:08:01.658676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:27:59.834 [2024-10-09 08:08:01.658689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:59.834 [2024-10-09 08:08:01.658727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:59.834 [2024-10-09 08:08:01.658745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:27:59.834 [2024-10-09 08:08:01.658758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:27:59.834 [2024-10-09 08:08:01.658770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:59.834 [2024-10-09 08:08:01.658800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:59.834 [2024-10-09 08:08:01.658823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:27:59.834 [2024-10-09 08:08:01.658835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:27:59.834 [2024-10-09 08:08:01.658847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:59.834 [2024-10-09 08:08:01.658927] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.323 ms, result 0 00:27:59.834 true 00:27:59.834 08:08:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:28:00.092 { 00:28:00.092 "name": "ftl", 00:28:00.092 "properties": [ 00:28:00.092 { 00:28:00.092 "name": "superblock_version", 00:28:00.092 "value": 5, 00:28:00.092 "read-only": true 00:28:00.092 }, 
00:28:00.092 { 00:28:00.092 "name": "base_device", 00:28:00.092 "bands": [ 00:28:00.092 { 00:28:00.092 "id": 0, 00:28:00.092 "state": "CLOSED", 00:28:00.092 "validity": 1.0 00:28:00.092 }, 00:28:00.092 { 00:28:00.092 "id": 1, 00:28:00.092 "state": "CLOSED", 00:28:00.092 "validity": 1.0 00:28:00.092 }, 00:28:00.092 { 00:28:00.092 "id": 2, 00:28:00.092 "state": "CLOSED", 00:28:00.092 "validity": 0.007843137254901933 00:28:00.092 }, 00:28:00.092 { 00:28:00.092 "id": 3, 00:28:00.092 "state": "FREE", 00:28:00.092 "validity": 0.0 00:28:00.092 }, 00:28:00.092 { 00:28:00.092 "id": 4, 00:28:00.092 "state": "FREE", 00:28:00.092 "validity": 0.0 00:28:00.092 }, 00:28:00.092 { 00:28:00.092 "id": 5, 00:28:00.092 "state": "FREE", 00:28:00.092 "validity": 0.0 00:28:00.092 }, 00:28:00.092 { 00:28:00.092 "id": 6, 00:28:00.092 "state": "FREE", 00:28:00.092 "validity": 0.0 00:28:00.092 }, 00:28:00.092 { 00:28:00.092 "id": 7, 00:28:00.092 "state": "FREE", 00:28:00.092 "validity": 0.0 00:28:00.092 }, 00:28:00.092 { 00:28:00.092 "id": 8, 00:28:00.092 "state": "FREE", 00:28:00.092 "validity": 0.0 00:28:00.092 }, 00:28:00.092 { 00:28:00.092 "id": 9, 00:28:00.092 "state": "FREE", 00:28:00.092 "validity": 0.0 00:28:00.092 }, 00:28:00.092 { 00:28:00.092 "id": 10, 00:28:00.092 "state": "FREE", 00:28:00.092 "validity": 0.0 00:28:00.092 }, 00:28:00.092 { 00:28:00.092 "id": 11, 00:28:00.092 "state": "FREE", 00:28:00.092 "validity": 0.0 00:28:00.092 }, 00:28:00.092 { 00:28:00.092 "id": 12, 00:28:00.092 "state": "FREE", 00:28:00.092 "validity": 0.0 00:28:00.092 }, 00:28:00.092 { 00:28:00.092 "id": 13, 00:28:00.092 "state": "FREE", 00:28:00.092 "validity": 0.0 00:28:00.092 }, 00:28:00.092 { 00:28:00.092 "id": 14, 00:28:00.092 "state": "FREE", 00:28:00.092 "validity": 0.0 00:28:00.092 }, 00:28:00.092 { 00:28:00.092 "id": 15, 00:28:00.092 "state": "FREE", 00:28:00.092 "validity": 0.0 00:28:00.092 }, 00:28:00.092 { 00:28:00.092 "id": 16, 00:28:00.092 "state": "FREE", 00:28:00.092 "validity": 0.0 00:28:00.092 }, 00:28:00.092 { 00:28:00.092 "id": 17, 00:28:00.092 "state": "FREE", 00:28:00.092 "validity": 0.0 00:28:00.092 } 00:28:00.092 ], 00:28:00.092 "read-only": true 00:28:00.092 }, 00:28:00.092 { 00:28:00.092 "name": "cache_device", 00:28:00.092 "type": "bdev", 00:28:00.092 "chunks": [ 00:28:00.092 { 00:28:00.092 "id": 0, 00:28:00.092 "state": "INACTIVE", 00:28:00.092 "utilization": 0.0 00:28:00.092 }, 00:28:00.092 { 00:28:00.092 "id": 1, 00:28:00.092 "state": "OPEN", 00:28:00.092 "utilization": 0.0 00:28:00.092 }, 00:28:00.092 { 00:28:00.092 "id": 2, 00:28:00.092 "state": "OPEN", 00:28:00.092 "utilization": 0.0 00:28:00.092 }, 00:28:00.092 { 00:28:00.092 "id": 3, 00:28:00.092 "state": "FREE", 00:28:00.092 "utilization": 0.0 00:28:00.092 }, 00:28:00.092 { 00:28:00.092 "id": 4, 00:28:00.092 "state": "FREE", 00:28:00.092 "utilization": 0.0 00:28:00.092 } 00:28:00.093 ], 00:28:00.093 "read-only": true 00:28:00.093 }, 00:28:00.093 { 00:28:00.093 "name": "verbose_mode", 00:28:00.093 "value": true, 00:28:00.093 "unit": "", 00:28:00.093 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:28:00.093 }, 00:28:00.093 { 00:28:00.093 "name": "prep_upgrade_on_shutdown", 00:28:00.093 "value": false, 00:28:00.093 "unit": "", 00:28:00.093 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:28:00.093 } 00:28:00.093 ] 00:28:00.093 } 00:28:00.093 08:08:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == 
"cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:28:00.093 08:08:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties 00:28:00.093 08:08:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:28:00.351 08:08:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:28:00.351 08:08:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:28:00.351 08:08:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:28:00.351 08:08:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:28:00.351 08:08:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:28:00.917 08:08:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:28:00.917 08:08:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:28:00.917 08:08:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:28:00.917 08:08:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:28:00.917 08:08:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:28:00.917 08:08:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:28:00.917 Validate MD5 checksum, iteration 1 00:28:00.917 08:08:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:28:00.917 08:08:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:28:00.917 08:08:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:28:00.917 08:08:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:28:00.917 08:08:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:28:00.917 08:08:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:28:00.917 08:08:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:28:00.917 [2024-10-09 08:08:02.772078] Starting SPDK v25.01-pre git sha1 1c2942c86 / DPDK 24.03.0 initialization... 
00:28:00.917 [2024-10-09 08:08:02.772221] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81939 ] 00:28:01.175 [2024-10-09 08:08:02.938157] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:01.175 [2024-10-09 08:08:03.166386] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:28:03.077  [2024-10-09T08:08:06.025Z] Copying: 502/1024 [MB] (502 MBps) [2024-10-09T08:08:06.025Z] Copying: 977/1024 [MB] (475 MBps) [2024-10-09T08:08:07.927Z] Copying: 1024/1024 [MB] (average 479 MBps) 00:28:05.915 00:28:05.915 08:08:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:28:05.915 08:08:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:28:07.867 08:08:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:28:07.867 08:08:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=d4eceb69671382d5b78cf74e27910d12 00:28:07.867 08:08:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ d4eceb69671382d5b78cf74e27910d12 != \d\4\e\c\e\b\6\9\6\7\1\3\8\2\d\5\b\7\8\c\f\7\4\e\2\7\9\1\0\d\1\2 ]] 00:28:07.867 08:08:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:28:07.867 08:08:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:28:07.867 Validate MD5 checksum, iteration 2 00:28:07.867 08:08:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:28:07.867 08:08:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:28:07.867 08:08:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:28:07.867 08:08:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:28:07.867 08:08:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:28:07.867 08:08:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:28:07.867 08:08:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:28:07.867 [2024-10-09 08:08:09.732897] Starting SPDK v25.01-pre git sha1 1c2942c86 / DPDK 24.03.0 initialization... 
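The shell trace above has just finished two things: an idle check, in which bdev_ftl_get_properties is queried over the RPC socket and filtered with jq so that both the count of cache chunks with non-zero utilization (used=0) and the count of bands in the OPENED state (opened=0) come back zero, and the first validation pass, in which tcp_dd reads a 1024 MiB window from ftln1 over NVMe/TCP and its md5 (d4eceb69671382d5b78cf74e27910d12) matches the recorded value. A condensed sketch of that flow, assuming the ftl/common.sh helpers traced above are sourced; paths are shortened ($testdir standing in for the traced test/ftl directory), and md5_expected is a hypothetical stand-in for checksums captured when the data was written earlier in the run:

# Idle check: the device must have no cache chunks in use and no OPENED bands.
used=$(scripts/rpc.py bdev_ftl_get_properties -b ftl \
  | jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length')
opened=$(scripts/rpc.py bdev_ftl_get_properties -b ftl \
  | jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length')
[[ $used -eq 0 && $opened -eq 0 ]] || exit 1

# Validation pass: read one 1 GiB window back through the NVMe/TCP initiator
# and compare its md5 with the checksum taken at write time (md5_expected is
# hypothetical; the write phase sits outside this excerpt).
tcp_dd --ib=ftln1 --of="$testdir/file" --bs=1048576 --count=1024 --qd=2 --skip=0
sum=$(md5sum "$testdir/file" | cut -f1 -d' ')
[[ $sum == "${md5_expected[0]}" ]] || exit 1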
00:28:07.867 [2024-10-09 08:08:09.733040] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82012 ] 00:28:08.124 [2024-10-09 08:08:09.900525] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:08.382 [2024-10-09 08:08:10.179761] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:28:10.280  [2024-10-09T08:08:12.858Z] Copying: 412/1024 [MB] (412 MBps) [2024-10-09T08:08:13.486Z] Copying: 847/1024 [MB] (435 MBps) [2024-10-09T08:08:16.017Z] Copying: 1024/1024 [MB] (average 421 MBps) 00:28:14.005 00:28:14.005 08:08:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:28:14.005 08:08:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:28:16.540 08:08:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:28:16.540 08:08:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=7a9dda5ea0a710b4ef357bb84b77d879 00:28:16.540 08:08:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 7a9dda5ea0a710b4ef357bb84b77d879 != \7\a\9\d\d\a\5\e\a\0\a\7\1\0\b\4\e\f\3\5\7\b\b\8\4\b\7\7\d\8\7\9 ]] 00:28:16.540 08:08:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:28:16.541 08:08:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:28:16.541 08:08:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:28:16.541 08:08:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@137 -- # [[ -n 81872 ]] 00:28:16.541 08:08:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@138 -- # kill -9 81872 00:28:16.541 08:08:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:28:16.541 08:08:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:28:16.541 08:08:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:28:16.541 08:08:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:28:16.541 08:08:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:28:16.541 08:08:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=82097 00:28:16.541 08:08:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:28:16.541 08:08:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 82097 00:28:16.541 08:08:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:28:16.541 08:08:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # '[' -z 82097 ']' 00:28:16.541 08:08:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:16.541 08:08:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:16.541 08:08:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:16.541 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
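The second window validates the same way (skip=1024, sum 7a9dda5ea0a710b4ef357bb84b77d879 matches), and the trace then moves to the dirty-shutdown phase: the original target, pid 81872, is killed with SIGKILL so FTL never gets to persist a clean shutdown state, and a fresh spdk_tgt (pid 82097) is started from the saved tgt.json. A minimal sketch of that handoff, again assuming the sourced test helpers and shortened paths:

# Dirty shutdown: SIGKILL denies FTL any chance to write a clean shutdown
# marker, deliberately leaving the device dirty.
kill -9 "$spdk_tgt_pid"
unset spdk_tgt_pid

# Restart from the saved config. The FTL startup that follows has to take the
# recovery path (recover band state, restore P2L checkpoints) instead of a
# clean load, which is what the traces below show.
build/bin/spdk_tgt '--cpumask=[0]' --config="$testdir/config/tgt.json" &
spdk_tgt_pid=$!
waitforlisten "$spdk_tgt_pid"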
00:28:16.541 08:08:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:16.541 08:08:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:16.541 [2024-10-09 08:08:18.114368] Starting SPDK v25.01-pre git sha1 1c2942c86 / DPDK 24.03.0 initialization... 00:28:16.541 [2024-10-09 08:08:18.114514] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82097 ] 00:28:16.541 [2024-10-09 08:08:18.276317] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:16.541 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 830: 81872 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:28:16.541 [2024-10-09 08:08:18.514495] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:28:17.475 [2024-10-09 08:08:19.397006] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:28:17.475 [2024-10-09 08:08:19.397083] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:28:17.735 [2024-10-09 08:08:19.544916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:17.735 [2024-10-09 08:08:19.544978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:28:17.735 [2024-10-09 08:08:19.545003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:28:17.735 [2024-10-09 08:08:19.545015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:17.735 [2024-10-09 08:08:19.545083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:17.735 [2024-10-09 08:08:19.545102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:28:17.735 [2024-10-09 08:08:19.545114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.038 ms 00:28:17.735 [2024-10-09 08:08:19.545125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:17.735 [2024-10-09 08:08:19.545170] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:28:17.735 [2024-10-09 08:08:19.546119] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:28:17.735 [2024-10-09 08:08:19.546163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:17.735 [2024-10-09 08:08:19.546177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:28:17.735 [2024-10-09 08:08:19.546189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.011 ms 00:28:17.735 [2024-10-09 08:08:19.546206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:17.735 [2024-10-09 08:08:19.546722] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:28:17.735 [2024-10-09 08:08:19.567007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:17.735 [2024-10-09 08:08:19.567062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:28:17.735 [2024-10-09 08:08:19.567081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 20.286 ms 00:28:17.735 [2024-10-09 08:08:19.567101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:17.735 [2024-10-09 08:08:19.579116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] 
Action 00:28:17.735 [2024-10-09 08:08:19.579159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:28:17.735 [2024-10-09 08:08:19.579175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.029 ms 00:28:17.735 [2024-10-09 08:08:19.579187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:17.735 [2024-10-09 08:08:19.579723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:17.735 [2024-10-09 08:08:19.579763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:28:17.735 [2024-10-09 08:08:19.579779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.423 ms 00:28:17.735 [2024-10-09 08:08:19.579791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:17.735 [2024-10-09 08:08:19.579862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:17.735 [2024-10-09 08:08:19.579881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:28:17.735 [2024-10-09 08:08:19.579893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.045 ms 00:28:17.735 [2024-10-09 08:08:19.579904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:17.735 [2024-10-09 08:08:19.579947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:17.735 [2024-10-09 08:08:19.579963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:28:17.735 [2024-10-09 08:08:19.579985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:28:17.735 [2024-10-09 08:08:19.580000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:17.735 [2024-10-09 08:08:19.580035] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:28:17.735 [2024-10-09 08:08:19.583935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:17.735 [2024-10-09 08:08:19.583973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:28:17.735 [2024-10-09 08:08:19.583988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.907 ms 00:28:17.735 [2024-10-09 08:08:19.584000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:17.735 [2024-10-09 08:08:19.584032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:17.735 [2024-10-09 08:08:19.584047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:28:17.735 [2024-10-09 08:08:19.584060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:28:17.735 [2024-10-09 08:08:19.584071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:17.735 [2024-10-09 08:08:19.584118] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:28:17.735 [2024-10-09 08:08:19.584147] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:28:17.735 [2024-10-09 08:08:19.584192] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:28:17.735 [2024-10-09 08:08:19.584212] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:28:17.735 [2024-10-09 08:08:19.584324] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:28:17.735 [2024-10-09 08:08:19.584360] upgrade/ftl_sb_v5.c: 
101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:28:17.735 [2024-10-09 08:08:19.584375] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:28:17.735 [2024-10-09 08:08:19.584390] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:28:17.735 [2024-10-09 08:08:19.584403] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:28:17.735 [2024-10-09 08:08:19.584416] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:28:17.735 [2024-10-09 08:08:19.584432] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:28:17.735 [2024-10-09 08:08:19.584443] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:28:17.735 [2024-10-09 08:08:19.584453] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:28:17.735 [2024-10-09 08:08:19.584466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:17.735 [2024-10-09 08:08:19.584477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:28:17.735 [2024-10-09 08:08:19.584488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.349 ms 00:28:17.735 [2024-10-09 08:08:19.584499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:17.735 [2024-10-09 08:08:19.584596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:17.735 [2024-10-09 08:08:19.584610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:28:17.735 [2024-10-09 08:08:19.584622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.071 ms 00:28:17.735 [2024-10-09 08:08:19.584637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:17.735 [2024-10-09 08:08:19.584754] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:28:17.735 [2024-10-09 08:08:19.584770] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:28:17.735 [2024-10-09 08:08:19.584782] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:28:17.735 [2024-10-09 08:08:19.584794] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:17.735 [2024-10-09 08:08:19.584805] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:28:17.735 [2024-10-09 08:08:19.584815] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:28:17.735 [2024-10-09 08:08:19.584826] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:28:17.736 [2024-10-09 08:08:19.584836] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:28:17.736 [2024-10-09 08:08:19.584846] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:28:17.736 [2024-10-09 08:08:19.584856] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:17.736 [2024-10-09 08:08:19.584866] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:28:17.736 [2024-10-09 08:08:19.584877] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:28:17.736 [2024-10-09 08:08:19.584887] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:17.736 [2024-10-09 08:08:19.584897] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:28:17.736 [2024-10-09 08:08:19.584907] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 
00:28:17.736 [2024-10-09 08:08:19.584917] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:17.736 [2024-10-09 08:08:19.584927] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:28:17.736 [2024-10-09 08:08:19.584937] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:28:17.736 [2024-10-09 08:08:19.584947] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:17.736 [2024-10-09 08:08:19.584957] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:28:17.736 [2024-10-09 08:08:19.584968] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:28:17.736 [2024-10-09 08:08:19.584979] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:17.736 [2024-10-09 08:08:19.585003] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:28:17.736 [2024-10-09 08:08:19.585014] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:28:17.736 [2024-10-09 08:08:19.585024] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:17.736 [2024-10-09 08:08:19.585034] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:28:17.736 [2024-10-09 08:08:19.585045] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:28:17.736 [2024-10-09 08:08:19.585055] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:17.736 [2024-10-09 08:08:19.585065] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:28:17.736 [2024-10-09 08:08:19.585075] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:28:17.736 [2024-10-09 08:08:19.585085] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:17.736 [2024-10-09 08:08:19.585095] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:28:17.736 [2024-10-09 08:08:19.585106] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:28:17.736 [2024-10-09 08:08:19.585115] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:17.736 [2024-10-09 08:08:19.585126] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:28:17.736 [2024-10-09 08:08:19.585136] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:28:17.736 [2024-10-09 08:08:19.585146] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:17.736 [2024-10-09 08:08:19.585156] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:28:17.736 [2024-10-09 08:08:19.585167] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:28:17.736 [2024-10-09 08:08:19.585177] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:17.736 [2024-10-09 08:08:19.585187] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:28:17.736 [2024-10-09 08:08:19.585197] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:28:17.736 [2024-10-09 08:08:19.585207] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:17.736 [2024-10-09 08:08:19.585217] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:28:17.736 [2024-10-09 08:08:19.585228] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:28:17.736 [2024-10-09 08:08:19.585240] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:28:17.736 [2024-10-09 08:08:19.585250] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 
0.12 MiB 00:28:17.736 [2024-10-09 08:08:19.585261] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:28:17.736 [2024-10-09 08:08:19.585272] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:28:17.736 [2024-10-09 08:08:19.585282] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:28:17.736 [2024-10-09 08:08:19.585292] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:28:17.736 [2024-10-09 08:08:19.585302] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:28:17.736 [2024-10-09 08:08:19.585313] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:28:17.736 [2024-10-09 08:08:19.585325] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:28:17.736 [2024-10-09 08:08:19.585363] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:17.736 [2024-10-09 08:08:19.585376] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:28:17.736 [2024-10-09 08:08:19.585388] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:28:17.736 [2024-10-09 08:08:19.585399] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:28:17.736 [2024-10-09 08:08:19.585410] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:28:17.736 [2024-10-09 08:08:19.585421] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:28:17.736 [2024-10-09 08:08:19.585432] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:28:17.736 [2024-10-09 08:08:19.585443] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:28:17.736 [2024-10-09 08:08:19.585454] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:28:17.736 [2024-10-09 08:08:19.585466] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:28:17.736 [2024-10-09 08:08:19.585477] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:28:17.736 [2024-10-09 08:08:19.585488] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:28:17.736 [2024-10-09 08:08:19.585499] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:28:17.736 [2024-10-09 08:08:19.585510] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:28:17.736 [2024-10-09 08:08:19.585522] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:28:17.736 [2024-10-09 08:08:19.585532] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata 
layout - base dev: 00:28:17.736 [2024-10-09 08:08:19.585545] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:17.736 [2024-10-09 08:08:19.585557] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:28:17.736 [2024-10-09 08:08:19.585569] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:28:17.736 [2024-10-09 08:08:19.585580] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:28:17.736 [2024-10-09 08:08:19.585591] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:28:17.736 [2024-10-09 08:08:19.585604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:17.736 [2024-10-09 08:08:19.585616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:28:17.736 [2024-10-09 08:08:19.585627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.921 ms 00:28:17.736 [2024-10-09 08:08:19.585638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:17.736 [2024-10-09 08:08:19.616592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:17.736 [2024-10-09 08:08:19.616648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:28:17.736 [2024-10-09 08:08:19.616668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 30.883 ms 00:28:17.736 [2024-10-09 08:08:19.616680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:17.736 [2024-10-09 08:08:19.616747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:17.736 [2024-10-09 08:08:19.616763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:28:17.736 [2024-10-09 08:08:19.616782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:28:17.736 [2024-10-09 08:08:19.616793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:17.736 [2024-10-09 08:08:19.663236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:17.736 [2024-10-09 08:08:19.663294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:28:17.736 [2024-10-09 08:08:19.663314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 46.349 ms 00:28:17.736 [2024-10-09 08:08:19.663326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:17.736 [2024-10-09 08:08:19.663426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:17.736 [2024-10-09 08:08:19.663444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:28:17.736 [2024-10-09 08:08:19.663457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:28:17.736 [2024-10-09 08:08:19.663468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:17.736 [2024-10-09 08:08:19.663652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:17.736 [2024-10-09 08:08:19.663672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:28:17.736 [2024-10-09 08:08:19.663698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.079 ms 00:28:17.736 [2024-10-09 08:08:19.663711] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:28:17.763 [2024-10-09 08:08:19.663778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:17.763 [2024-10-09 08:08:19.663794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:28:17.763 [2024-10-09 08:08:19.663807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.027 ms 00:28:17.763 [2024-10-09 08:08:19.663818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:17.763 [2024-10-09 08:08:19.680801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:17.763 [2024-10-09 08:08:19.680849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:28:17.763 [2024-10-09 08:08:19.680866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 16.954 ms 00:28:17.763 [2024-10-09 08:08:19.680878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:17.763 [2024-10-09 08:08:19.681041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:17.763 [2024-10-09 08:08:19.681064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:28:17.763 [2024-10-09 08:08:19.681078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:28:17.763 [2024-10-09 08:08:19.681093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:17.763 [2024-10-09 08:08:19.701467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:17.763 [2024-10-09 08:08:19.701514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:28:17.763 [2024-10-09 08:08:19.701531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 20.342 ms 00:28:17.763 [2024-10-09 08:08:19.701551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:17.763 [2024-10-09 08:08:19.714146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:17.763 [2024-10-09 08:08:19.714204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:28:17.763 [2024-10-09 08:08:19.714222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.670 ms 00:28:17.763 [2024-10-09 08:08:19.714233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:18.081 [2024-10-09 08:08:19.787242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:18.081 [2024-10-09 08:08:19.787312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:28:18.081 [2024-10-09 08:08:19.787343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 72.924 ms 00:28:18.081 [2024-10-09 08:08:19.787358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:18.081 [2024-10-09 08:08:19.787585] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:28:18.081 [2024-10-09 08:08:19.787745] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:28:18.081 [2024-10-09 08:08:19.787892] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:28:18.081 [2024-10-09 08:08:19.788016] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:28:18.081 [2024-10-09 08:08:19.788037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:18.081 [2024-10-09 08:08:19.788049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:28:18.081 [2024-10-09 
08:08:19.788066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.608 ms 00:28:18.081 [2024-10-09 08:08:19.788078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:18.081 [2024-10-09 08:08:19.788199] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:28:18.081 [2024-10-09 08:08:19.788224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:18.081 [2024-10-09 08:08:19.788235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:28:18.081 [2024-10-09 08:08:19.788247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.026 ms 00:28:18.081 [2024-10-09 08:08:19.788258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:18.081 [2024-10-09 08:08:19.807370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:18.081 [2024-10-09 08:08:19.807415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:28:18.081 [2024-10-09 08:08:19.807433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 19.074 ms 00:28:18.081 [2024-10-09 08:08:19.807445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:18.081 [2024-10-09 08:08:19.819270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:18.081 [2024-10-09 08:08:19.819343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:28:18.081 [2024-10-09 08:08:19.819368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:28:18.081 [2024-10-09 08:08:19.819381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:18.081 [2024-10-09 08:08:19.819507] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 262144, seq id 14 00:28:18.081 [2024-10-09 08:08:19.819653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:18.081 [2024-10-09 08:08:19.819679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:28:18.081 [2024-10-09 08:08:19.819709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.149 ms 00:28:18.081 [2024-10-09 08:08:19.819721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:18.339 [2024-10-09 08:08:20.282764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:18.339 [2024-10-09 08:08:20.282849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:28:18.339 [2024-10-09 08:08:20.282872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 461.896 ms 00:28:18.339 [2024-10-09 08:08:20.282885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:18.339 [2024-10-09 08:08:20.287667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:18.339 [2024-10-09 08:08:20.287724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:28:18.339 [2024-10-09 08:08:20.287743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.897 ms 00:28:18.339 [2024-10-09 08:08:20.287754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:18.339 [2024-10-09 08:08:20.288303] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 262144, seq id 14 00:28:18.339 [2024-10-09 08:08:20.288351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:18.339 [2024-10-09 08:08:20.288367] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:28:18.339 [2024-10-09 08:08:20.288381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.529 ms 00:28:18.339 [2024-10-09 08:08:20.288394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:18.339 [2024-10-09 08:08:20.288437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:18.339 [2024-10-09 08:08:20.288453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:28:18.339 [2024-10-09 08:08:20.288465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:28:18.339 [2024-10-09 08:08:20.288476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:18.339 [2024-10-09 08:08:20.288524] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 469.017 ms, result 0 00:28:18.339 [2024-10-09 08:08:20.288580] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 524288, seq id 15 00:28:18.339 [2024-10-09 08:08:20.288699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:18.339 [2024-10-09 08:08:20.288712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:28:18.339 [2024-10-09 08:08:20.288724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.120 ms 00:28:18.339 [2024-10-09 08:08:20.288735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:18.905 [2024-10-09 08:08:20.759240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:18.905 [2024-10-09 08:08:20.759308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:28:18.905 [2024-10-09 08:08:20.759345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 469.396 ms 00:28:18.905 [2024-10-09 08:08:20.759369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:18.905 [2024-10-09 08:08:20.764098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:18.905 [2024-10-09 08:08:20.764144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:28:18.905 [2024-10-09 08:08:20.764161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.826 ms 00:28:18.905 [2024-10-09 08:08:20.764173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:18.905 [2024-10-09 08:08:20.764499] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 524288, seq id 15 00:28:18.905 [2024-10-09 08:08:20.764551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:18.905 [2024-10-09 08:08:20.764564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:28:18.905 [2024-10-09 08:08:20.764576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.337 ms 00:28:18.905 [2024-10-09 08:08:20.764588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:18.905 [2024-10-09 08:08:20.764633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:18.905 [2024-10-09 08:08:20.764650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:28:18.905 [2024-10-09 08:08:20.764662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:28:18.905 [2024-10-09 08:08:20.764673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:18.905 [2024-10-09 
08:08:20.764723] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 476.144 ms, result 0 00:28:18.905 [2024-10-09 08:08:20.764777] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:28:18.905 [2024-10-09 08:08:20.764793] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:28:18.905 [2024-10-09 08:08:20.764806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:18.905 [2024-10-09 08:08:20.764824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:28:18.905 [2024-10-09 08:08:20.764836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 945.329 ms 00:28:18.905 [2024-10-09 08:08:20.764848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:18.905 [2024-10-09 08:08:20.764890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:18.905 [2024-10-09 08:08:20.764905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:28:18.905 [2024-10-09 08:08:20.764917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:28:18.905 [2024-10-09 08:08:20.764927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:18.905 [2024-10-09 08:08:20.777746] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:28:18.905 [2024-10-09 08:08:20.777913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:18.905 [2024-10-09 08:08:20.777933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:28:18.905 [2024-10-09 08:08:20.777947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.954 ms 00:28:18.905 [2024-10-09 08:08:20.777959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:18.905 [2024-10-09 08:08:20.778718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:18.905 [2024-10-09 08:08:20.778749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from shared memory 00:28:18.905 [2024-10-09 08:08:20.778764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.646 ms 00:28:18.905 [2024-10-09 08:08:20.778775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:18.905 [2024-10-09 08:08:20.781346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:18.905 [2024-10-09 08:08:20.781379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:28:18.905 [2024-10-09 08:08:20.781398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.544 ms 00:28:18.905 [2024-10-09 08:08:20.781414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:18.905 [2024-10-09 08:08:20.781475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:18.905 [2024-10-09 08:08:20.781490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Complete trim transaction 00:28:18.905 [2024-10-09 08:08:20.781504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:28:18.905 [2024-10-09 08:08:20.781514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:18.905 [2024-10-09 08:08:20.781643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:18.905 [2024-10-09 08:08:20.781661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:28:18.905 
[2024-10-09 08:08:20.781673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.020 ms 00:28:18.905 [2024-10-09 08:08:20.781685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:18.905 [2024-10-09 08:08:20.781717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:18.905 [2024-10-09 08:08:20.781731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:28:18.905 [2024-10-09 08:08:20.781743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:28:18.905 [2024-10-09 08:08:20.781754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:18.905 [2024-10-09 08:08:20.781795] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:28:18.905 [2024-10-09 08:08:20.781811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:18.905 [2024-10-09 08:08:20.781826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:28:18.905 [2024-10-09 08:08:20.781838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms 00:28:18.905 [2024-10-09 08:08:20.781848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:18.905 [2024-10-09 08:08:20.781913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:18.905 [2024-10-09 08:08:20.781935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:28:18.905 [2024-10-09 08:08:20.781947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.040 ms 00:28:18.905 [2024-10-09 08:08:20.781959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:18.905 [2024-10-09 08:08:20.783111] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1237.693 ms, result 0 00:28:18.905 [2024-10-09 08:08:20.798525] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:18.905 [2024-10-09 08:08:20.814535] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:28:18.905 [2024-10-09 08:08:20.823534] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:28:19.162 08:08:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:19.162 08:08:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # return 0 00:28:19.162 08:08:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:28:19.162 08:08:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:28:19.162 08:08:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:28:19.162 08:08:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:28:19.162 08:08:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:28:19.162 08:08:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:28:19.162 Validate MD5 checksum, iteration 1 00:28:19.162 08:08:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:28:19.162 08:08:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:28:19.162 08:08:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:28:19.162 08:08:20 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:28:19.162 08:08:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:28:19.162 08:08:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:28:19.162 08:08:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:28:19.162 [2024-10-09 08:08:21.036903] Starting SPDK v25.01-pre git sha1 1c2942c86 / DPDK 24.03.0 initialization... 00:28:19.162 [2024-10-09 08:08:21.037070] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82137 ] 00:28:19.420 [2024-10-09 08:08:21.229601] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:19.420 [2024-10-09 08:08:21.430073] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:28:21.321  [2024-10-09T08:08:24.266Z] Copying: 458/1024 [MB] (458 MBps) [2024-10-09T08:08:24.266Z] Copying: 938/1024 [MB] (480 MBps) [2024-10-09T08:08:26.822Z] Copying: 1024/1024 [MB] (average 464 MBps) 00:28:24.810 00:28:24.810 08:08:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:28:24.810 08:08:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:28:26.712 08:08:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:28:26.712 Validate MD5 checksum, iteration 2 00:28:26.712 08:08:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=d4eceb69671382d5b78cf74e27910d12 00:28:26.712 08:08:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ d4eceb69671382d5b78cf74e27910d12 != \d\4\e\c\e\b\6\9\6\7\1\3\8\2\d\5\b\7\8\c\f\7\4\e\2\7\9\1\0\d\1\2 ]] 00:28:26.712 08:08:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:28:26.712 08:08:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:28:26.712 08:08:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:28:26.712 08:08:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:28:26.712 08:08:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:28:26.712 08:08:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:28:26.712 08:08:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:28:26.712 08:08:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:28:26.712 08:08:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:28:26.712 
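A minimal sketch of the validation loop being traced across the entries above (the test_validate_checksum path in ftl/upgrade_shutdown.sh), reconstructed only from the xtrace lines: the function wrapper, the testdir variable (standing in for /home/vagrant/spdk_repo/spdk/test/ftl), and the checksums array (assumed to be filled by an earlier calculate pass) are assumptions; the individual commands are taken directly from the trace.

    test_validate_checksum() {
        local skip=0 i sum
        for ((i = 0; i < iterations; i++)); do
            echo "Validate MD5 checksum, iteration $((i + 1))"
            # Pull 1024 x 1 MiB blocks from the FTL bdev over NVMe/TCP into a
            # scratch file; tcp_dd wraps the spdk_dd invocation shown above.
            tcp_dd --ib=ftln1 --of="$testdir/file" --bs=1048576 --count=1024 --qd=2 --skip=$skip
            ((skip += 1024))
            # Compare against the checksum recorded before the shutdown/upgrade.
            sum=$(md5sum "$testdir/file" | cut -f1 -d' ')
            [[ $sum == "${checksums[i]}" ]] || return 1
        done
    }
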
[2024-10-09 08:08:28.679850] Starting SPDK v25.01-pre git sha1 1c2942c86 / DPDK 24.03.0 initialization... 00:28:26.712 [2024-10-09 08:08:28.680200] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82212 ] 00:28:26.970 [2024-10-09 08:08:28.840881] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:27.228 [2024-10-09 08:08:29.077237] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:28:29.154  [2024-10-09T08:08:31.732Z] Copying: 476/1024 [MB] (476 MBps) [2024-10-09T08:08:32.297Z] Copying: 899/1024 [MB] (423 MBps) [2024-10-09T08:08:33.672Z] Copying: 1024/1024 [MB] (average 441 MBps) 00:28:31.660 00:28:31.660 08:08:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:28:31.660 08:08:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:28:33.559 08:08:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:28:33.560 08:08:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=7a9dda5ea0a710b4ef357bb84b77d879 00:28:33.560 08:08:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 7a9dda5ea0a710b4ef357bb84b77d879 != \7\a\9\d\d\a\5\e\a\0\a\7\1\0\b\4\e\f\3\5\7\b\b\8\4\b\7\7\d\8\7\9 ]] 00:28:33.560 08:08:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:28:33.560 08:08:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:28:33.560 08:08:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:28:33.560 08:08:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:28:33.560 08:08:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:28:33.560 08:08:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:28:33.818 08:08:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:28:33.818 08:08:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:28:33.818 08:08:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@193 -- # tcp_target_cleanup 00:28:33.818 08:08:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@144 -- # tcp_target_shutdown 00:28:33.818 08:08:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 82097 ]] 00:28:33.818 08:08:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 82097 00:28:33.818 08:08:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@950 -- # '[' -z 82097 ']' 00:28:33.818 08:08:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # kill -0 82097 00:28:33.818 08:08:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # uname 00:28:33.818 08:08:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:33.818 08:08:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 82097 00:28:33.818 killing process with pid 82097 00:28:33.818 08:08:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:33.818 08:08:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:33.818 08:08:35 ftl.ftl_upgrade_shutdown -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 82097' 00:28:33.818 08:08:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@969 -- # kill 82097 00:28:33.818 08:08:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@974 -- # wait 82097 00:28:34.757 [2024-10-09 08:08:36.594649] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:28:34.757 [2024-10-09 08:08:36.612881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:34.757 [2024-10-09 08:08:36.612949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:28:34.757 [2024-10-09 08:08:36.612976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:28:34.757 [2024-10-09 08:08:36.612988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:34.757 [2024-10-09 08:08:36.613020] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:28:34.757 [2024-10-09 08:08:36.616562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:34.757 [2024-10-09 08:08:36.616595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:28:34.757 [2024-10-09 08:08:36.616609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.521 ms 00:28:34.757 [2024-10-09 08:08:36.616619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:34.757 [2024-10-09 08:08:36.616848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:34.757 [2024-10-09 08:08:36.616873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:28:34.757 [2024-10-09 08:08:36.616885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.202 ms 00:28:34.757 [2024-10-09 08:08:36.616896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:34.757 [2024-10-09 08:08:36.618263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:34.757 [2024-10-09 08:08:36.618308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:28:34.757 [2024-10-09 08:08:36.618325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.323 ms 00:28:34.757 [2024-10-09 08:08:36.618356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:34.757 [2024-10-09 08:08:36.619678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:34.757 [2024-10-09 08:08:36.619740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:28:34.757 [2024-10-09 08:08:36.619756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.277 ms 00:28:34.757 [2024-10-09 08:08:36.619767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:34.757 [2024-10-09 08:08:36.633055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:34.757 [2024-10-09 08:08:36.633109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:28:34.757 [2024-10-09 08:08:36.633128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.216 ms 00:28:34.757 [2024-10-09 08:08:36.633140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:34.757 [2024-10-09 08:08:36.640083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:34.757 [2024-10-09 08:08:36.640127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:28:34.757 [2024-10-09 08:08:36.640144] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.897 ms 00:28:34.757 [2024-10-09 08:08:36.640155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:34.757 [2024-10-09 08:08:36.640346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:34.757 [2024-10-09 08:08:36.640391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:28:34.757 [2024-10-09 08:08:36.640408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.095 ms 00:28:34.757 [2024-10-09 08:08:36.640419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:34.757 [2024-10-09 08:08:36.653314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:34.757 [2024-10-09 08:08:36.653539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:28:34.757 [2024-10-09 08:08:36.653568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.869 ms 00:28:34.757 [2024-10-09 08:08:36.653581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:34.757 [2024-10-09 08:08:36.666549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:34.757 [2024-10-09 08:08:36.666620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:28:34.757 [2024-10-09 08:08:36.666637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.918 ms 00:28:34.757 [2024-10-09 08:08:36.666648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:34.757 [2024-10-09 08:08:36.679146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:34.757 [2024-10-09 08:08:36.679202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:28:34.757 [2024-10-09 08:08:36.679217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.457 ms 00:28:34.757 [2024-10-09 08:08:36.679228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:34.757 [2024-10-09 08:08:36.692151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:34.757 [2024-10-09 08:08:36.692204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:28:34.757 [2024-10-09 08:08:36.692220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.800 ms 00:28:34.757 [2024-10-09 08:08:36.692246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:34.757 [2024-10-09 08:08:36.692288] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:28:34.757 [2024-10-09 08:08:36.692327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:28:34.757 [2024-10-09 08:08:36.692342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:28:34.757 [2024-10-09 08:08:36.692376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:28:34.757 [2024-10-09 08:08:36.692389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:28:34.757 [2024-10-09 08:08:36.692402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:28:34.757 [2024-10-09 08:08:36.692414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:28:34.757 [2024-10-09 08:08:36.692425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:28:34.757 [2024-10-09 
08:08:36.692438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:28:34.757 [2024-10-09 08:08:36.692449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:28:34.757 [2024-10-09 08:08:36.692461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:28:34.757 [2024-10-09 08:08:36.692473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:28:34.757 [2024-10-09 08:08:36.692485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:28:34.757 [2024-10-09 08:08:36.692497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:28:34.757 [2024-10-09 08:08:36.692508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:28:34.757 [2024-10-09 08:08:36.692520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:28:34.757 [2024-10-09 08:08:36.692532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:28:34.757 [2024-10-09 08:08:36.692544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:28:34.757 [2024-10-09 08:08:36.692556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:28:34.757 [2024-10-09 08:08:36.692570] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:28:34.757 [2024-10-09 08:08:36.692582] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 5c9ffe53-3028-496f-aeb2-381db4ce6ed6 00:28:34.757 [2024-10-09 08:08:36.692594] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:28:34.757 [2024-10-09 08:08:36.692605] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:28:34.757 [2024-10-09 08:08:36.692624] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:28:34.757 [2024-10-09 08:08:36.692636] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:28:34.757 [2024-10-09 08:08:36.692647] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:28:34.757 [2024-10-09 08:08:36.692658] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:28:34.757 [2024-10-09 08:08:36.692669] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:28:34.757 [2024-10-09 08:08:36.692679] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:28:34.757 [2024-10-09 08:08:36.692689] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:28:34.757 [2024-10-09 08:08:36.692701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:34.757 [2024-10-09 08:08:36.692713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:28:34.757 [2024-10-09 08:08:36.692725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.414 ms 00:28:34.757 [2024-10-09 08:08:36.692737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:34.758 [2024-10-09 08:08:36.709895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:34.758 [2024-10-09 08:08:36.710108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:28:34.758 [2024-10-09 08:08:36.710137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 
duration: 17.114 ms 00:28:34.758 [2024-10-09 08:08:36.710150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:34.758 [2024-10-09 08:08:36.710673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:34.758 [2024-10-09 08:08:36.710700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:28:34.758 [2024-10-09 08:08:36.710714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.471 ms 00:28:34.758 [2024-10-09 08:08:36.710724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:34.758 [2024-10-09 08:08:36.760394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:34.758 [2024-10-09 08:08:36.760458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:28:34.758 [2024-10-09 08:08:36.760475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:34.758 [2024-10-09 08:08:36.760485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:34.758 [2024-10-09 08:08:36.760546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:34.758 [2024-10-09 08:08:36.760560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:28:34.758 [2024-10-09 08:08:36.760571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:34.758 [2024-10-09 08:08:36.760580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:34.758 [2024-10-09 08:08:36.760681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:34.758 [2024-10-09 08:08:36.760705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:28:34.758 [2024-10-09 08:08:36.760718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:34.758 [2024-10-09 08:08:36.760728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:34.758 [2024-10-09 08:08:36.760750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:34.758 [2024-10-09 08:08:36.760762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:28:34.758 [2024-10-09 08:08:36.760772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:34.758 [2024-10-09 08:08:36.760782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:35.017 [2024-10-09 08:08:36.866551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:35.017 [2024-10-09 08:08:36.866823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:28:35.017 [2024-10-09 08:08:36.866973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:35.017 [2024-10-09 08:08:36.867024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:35.017 [2024-10-09 08:08:36.952181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:35.017 [2024-10-09 08:08:36.952537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:28:35.017 [2024-10-09 08:08:36.952658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:35.017 [2024-10-09 08:08:36.952707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:35.017 [2024-10-09 08:08:36.953012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:35.017 [2024-10-09 08:08:36.953050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:28:35.017 [2024-10-09 08:08:36.953076] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:35.017 [2024-10-09 08:08:36.953087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:35.017 [2024-10-09 08:08:36.953152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:35.017 [2024-10-09 08:08:36.953175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:28:35.017 [2024-10-09 08:08:36.953187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:35.017 [2024-10-09 08:08:36.953199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:35.017 [2024-10-09 08:08:36.953358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:35.017 [2024-10-09 08:08:36.953380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:28:35.017 [2024-10-09 08:08:36.953393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:35.017 [2024-10-09 08:08:36.953410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:35.017 [2024-10-09 08:08:36.953489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:35.017 [2024-10-09 08:08:36.953506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:28:35.017 [2024-10-09 08:08:36.953533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:35.017 [2024-10-09 08:08:36.953544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:35.017 [2024-10-09 08:08:36.953587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:35.017 [2024-10-09 08:08:36.953601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:28:35.017 [2024-10-09 08:08:36.953611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:35.017 [2024-10-09 08:08:36.953628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:35.017 [2024-10-09 08:08:36.953676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:35.017 [2024-10-09 08:08:36.953692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:28:35.017 [2024-10-09 08:08:36.953703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:35.017 [2024-10-09 08:08:36.953713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:35.017 [2024-10-09 08:08:36.953864] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 340.933 ms, result 0 00:28:36.393 08:08:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:28:36.393 08:08:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:28:36.393 08:08:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@194 -- # tcp_initiator_cleanup 00:28:36.393 08:08:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@188 -- # tcp_initiator_shutdown 00:28:36.393 08:08:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@181 -- # [[ -n '' ]] 00:28:36.393 08:08:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:28:36.393 Remove shared memory files 00:28:36.393 08:08:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@15 -- # remove_shm 00:28:36.393 08:08:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:28:36.393 08:08:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 
00:28:36.393 08:08:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:28:36.393 08:08:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid81872 00:28:36.393 08:08:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:28:36.393 08:08:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:28:36.393 ************************************ 00:28:36.393 END TEST ftl_upgrade_shutdown 00:28:36.393 ************************************ 00:28:36.393 00:28:36.393 real 1m36.568s 00:28:36.393 user 2m20.167s 00:28:36.393 sys 0m22.984s 00:28:36.393 08:08:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:36.393 08:08:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:36.393 08:08:38 ftl -- ftl/ftl.sh@80 -- # [[ 0 -eq 1 ]] 00:28:36.393 08:08:38 ftl -- ftl/ftl.sh@1 -- # at_ftl_exit 00:28:36.393 08:08:38 ftl -- ftl/ftl.sh@14 -- # killprocess 74525 00:28:36.393 08:08:38 ftl -- common/autotest_common.sh@950 -- # '[' -z 74525 ']' 00:28:36.394 08:08:38 ftl -- common/autotest_common.sh@954 -- # kill -0 74525 00:28:36.394 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (74525) - No such process 00:28:36.394 Process with pid 74525 is not found 00:28:36.394 08:08:38 ftl -- common/autotest_common.sh@977 -- # echo 'Process with pid 74525 is not found' 00:28:36.394 08:08:38 ftl -- ftl/ftl.sh@17 -- # [[ -n 0000:00:11.0 ]] 00:28:36.394 08:08:38 ftl -- ftl/ftl.sh@19 -- # spdk_tgt_pid=82344 00:28:36.394 08:08:38 ftl -- ftl/ftl.sh@20 -- # waitforlisten 82344 00:28:36.394 08:08:38 ftl -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:36.394 08:08:38 ftl -- common/autotest_common.sh@831 -- # '[' -z 82344 ']' 00:28:36.394 08:08:38 ftl -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:36.394 08:08:38 ftl -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:36.394 08:08:38 ftl -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:36.394 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:36.394 08:08:38 ftl -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:36.394 08:08:38 ftl -- common/autotest_common.sh@10 -- # set +x 00:28:36.652 [2024-10-09 08:08:38.425297] Starting SPDK v25.01-pre git sha1 1c2942c86 / DPDK 24.03.0 initialization... 
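The killprocess traces interleaved above (pid 82097 earlier, and just now the already-gone 74525) follow the usual autotest_common.sh pattern; a sketch reconstructed from the traced commands, with the surrounding control flow assumed rather than confirmed by the log:

    killprocess() {
        local pid=$1 process_name
        [ -z "$pid" ] && return 1
        # kill -0 only probes for existence; its failure is what produced the
        # "(74525) - No such process" line above.
        kill -0 "$pid" 2>/dev/null || {
            echo "Process with pid $pid is not found"
            return 0
        }
        if [ "$(uname)" = Linux ]; then
            process_name=$(ps --no-headers -o comm= "$pid")
        fi
        # The traced run sees reactor_0 here; a sudo wrapper would need its
        # child signalled instead (that branch is assumed, not visible above).
        [ "$process_name" = sudo ] && return 1
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"   # reap the child so its /dev/shm files can be removed
    }
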
00:28:36.652 [2024-10-09 08:08:38.425491] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82344 ] 00:28:36.652 [2024-10-09 08:08:38.594998] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:36.911 [2024-10-09 08:08:38.782643] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:28:37.847 08:08:39 ftl -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:37.847 08:08:39 ftl -- common/autotest_common.sh@864 -- # return 0 00:28:37.847 08:08:39 ftl -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:28:38.104 nvme0n1 00:28:38.104 08:08:39 ftl -- ftl/ftl.sh@22 -- # clear_lvols 00:28:38.104 08:08:39 ftl -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:28:38.104 08:08:39 ftl -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:28:38.363 08:08:40 ftl -- ftl/common.sh@28 -- # stores=7776f584-b678-425c-a0e0-dab18e19858b 00:28:38.363 08:08:40 ftl -- ftl/common.sh@29 -- # for lvs in $stores 00:28:38.363 08:08:40 ftl -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 7776f584-b678-425c-a0e0-dab18e19858b 00:28:38.622 08:08:40 ftl -- ftl/ftl.sh@23 -- # killprocess 82344 00:28:38.622 08:08:40 ftl -- common/autotest_common.sh@950 -- # '[' -z 82344 ']' 00:28:38.622 08:08:40 ftl -- common/autotest_common.sh@954 -- # kill -0 82344 00:28:38.622 08:08:40 ftl -- common/autotest_common.sh@955 -- # uname 00:28:38.622 08:08:40 ftl -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:38.622 08:08:40 ftl -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 82344 00:28:38.622 killing process with pid 82344 00:28:38.622 08:08:40 ftl -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:38.622 08:08:40 ftl -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:38.622 08:08:40 ftl -- common/autotest_common.sh@968 -- # echo 'killing process with pid 82344' 00:28:38.622 08:08:40 ftl -- common/autotest_common.sh@969 -- # kill 82344 00:28:38.622 08:08:40 ftl -- common/autotest_common.sh@974 -- # wait 82344 00:28:41.153 08:08:42 ftl -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:28:41.153 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:28:41.153 Waiting for block devices as requested 00:28:41.153 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:28:41.153 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:28:41.412 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:28:41.412 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:28:46.704 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:28:46.704 08:08:48 ftl -- ftl/ftl.sh@28 -- # remove_shm 00:28:46.704 Remove shared memory files 00:28:46.704 08:08:48 ftl -- ftl/common.sh@204 -- # echo Remove shared memory files 00:28:46.704 08:08:48 ftl -- ftl/common.sh@205 -- # rm -f rm -f 00:28:46.704 08:08:48 ftl -- ftl/common.sh@206 -- # rm -f rm -f 00:28:46.704 08:08:48 ftl -- ftl/common.sh@207 -- # rm -f rm -f 00:28:46.704 08:08:48 ftl -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:28:46.704 08:08:48 ftl -- ftl/common.sh@209 -- # rm -f rm -f 00:28:46.704 00:28:46.704 real 
11m46.353s 00:28:46.704 user 14m57.332s 00:28:46.704 sys 1m32.807s 00:28:46.704 08:08:48 ftl -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:46.704 ************************************ 00:28:46.704 08:08:48 ftl -- common/autotest_common.sh@10 -- # set +x 00:28:46.704 END TEST ftl 00:28:46.704 ************************************ 00:28:46.704 08:08:48 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:28:46.704 08:08:48 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:28:46.704 08:08:48 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:28:46.704 08:08:48 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:28:46.704 08:08:48 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:28:46.704 08:08:48 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:28:46.704 08:08:48 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:28:46.704 08:08:48 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]] 00:28:46.704 08:08:48 -- spdk/autotest.sh@381 -- # trap - SIGINT SIGTERM EXIT 00:28:46.704 08:08:48 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup 00:28:46.704 08:08:48 -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:46.704 08:08:48 -- common/autotest_common.sh@10 -- # set +x 00:28:46.704 08:08:48 -- spdk/autotest.sh@384 -- # autotest_cleanup 00:28:46.704 08:08:48 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:28:46.704 08:08:48 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:28:46.704 08:08:48 -- common/autotest_common.sh@10 -- # set +x 00:28:48.079 INFO: APP EXITING 00:28:48.079 INFO: killing all VMs 00:28:48.079 INFO: killing vhost app 00:28:48.079 INFO: EXIT DONE 00:28:48.338 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:28:48.905 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:28:48.905 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:28:48.905 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:28:48.905 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:28:49.163 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:28:49.733 Cleaning 00:28:49.733 Removing: /var/run/dpdk/spdk0/config 00:28:49.733 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:28:49.733 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:28:49.733 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:28:49.733 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:28:49.733 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:28:49.733 Removing: /var/run/dpdk/spdk0/hugepage_info 00:28:49.733 Removing: /var/run/dpdk/spdk0 00:28:49.733 Removing: /var/run/dpdk/spdk_pid58060 00:28:49.733 Removing: /var/run/dpdk/spdk_pid58283 00:28:49.733 Removing: /var/run/dpdk/spdk_pid58508 00:28:49.733 Removing: /var/run/dpdk/spdk_pid58616 00:28:49.733 Removing: /var/run/dpdk/spdk_pid58668 00:28:49.733 Removing: /var/run/dpdk/spdk_pid58796 00:28:49.733 Removing: /var/run/dpdk/spdk_pid58825 00:28:49.733 Removing: /var/run/dpdk/spdk_pid59024 00:28:49.733 Removing: /var/run/dpdk/spdk_pid59141 00:28:49.733 Removing: /var/run/dpdk/spdk_pid59248 00:28:49.733 Removing: /var/run/dpdk/spdk_pid59370 00:28:49.733 Removing: /var/run/dpdk/spdk_pid59478 00:28:49.733 Removing: /var/run/dpdk/spdk_pid59522 00:28:49.733 Removing: /var/run/dpdk/spdk_pid59560 00:28:49.733 Removing: /var/run/dpdk/spdk_pid59636 00:28:49.733 Removing: /var/run/dpdk/spdk_pid59753 00:28:49.733 Removing: /var/run/dpdk/spdk_pid60235 00:28:49.733 Removing: /var/run/dpdk/spdk_pid60310 00:28:49.733 
Removing: /var/run/dpdk/spdk_pid60386 00:28:49.733 Removing: /var/run/dpdk/spdk_pid60408 00:28:49.733 Removing: /var/run/dpdk/spdk_pid60562 00:28:49.733 Removing: /var/run/dpdk/spdk_pid60584 00:28:49.733 Removing: /var/run/dpdk/spdk_pid60743 00:28:49.733 Removing: /var/run/dpdk/spdk_pid60759 00:28:49.733 Removing: /var/run/dpdk/spdk_pid60823 00:28:49.733 Removing: /var/run/dpdk/spdk_pid60852 00:28:49.733 Removing: /var/run/dpdk/spdk_pid60918 00:28:49.733 Removing: /var/run/dpdk/spdk_pid60936 00:28:49.733 Removing: /var/run/dpdk/spdk_pid61137 00:28:49.733 Removing: /var/run/dpdk/spdk_pid61173 00:28:49.733 Removing: /var/run/dpdk/spdk_pid61267 00:28:49.733 Removing: /var/run/dpdk/spdk_pid61451 00:28:49.733 Removing: /var/run/dpdk/spdk_pid61546 00:28:49.733 Removing: /var/run/dpdk/spdk_pid61599 00:28:49.733 Removing: /var/run/dpdk/spdk_pid62108 00:28:49.733 Removing: /var/run/dpdk/spdk_pid62212 00:28:49.733 Removing: /var/run/dpdk/spdk_pid62332 00:28:49.733 Removing: /var/run/dpdk/spdk_pid62385 00:28:49.733 Removing: /var/run/dpdk/spdk_pid62416 00:28:49.733 Removing: /var/run/dpdk/spdk_pid62500 00:28:49.733 Removing: /var/run/dpdk/spdk_pid63137 00:28:49.733 Removing: /var/run/dpdk/spdk_pid63185 00:28:49.733 Removing: /var/run/dpdk/spdk_pid63710 00:28:49.733 Removing: /var/run/dpdk/spdk_pid63815 00:28:49.733 Removing: /var/run/dpdk/spdk_pid63941 00:28:49.733 Removing: /var/run/dpdk/spdk_pid64000 00:28:49.733 Removing: /var/run/dpdk/spdk_pid64031 00:28:49.733 Removing: /var/run/dpdk/spdk_pid64062 00:28:49.733 Removing: /var/run/dpdk/spdk_pid65949 00:28:49.733 Removing: /var/run/dpdk/spdk_pid66092 00:28:49.733 Removing: /var/run/dpdk/spdk_pid66096 00:28:49.733 Removing: /var/run/dpdk/spdk_pid66113 00:28:49.733 Removing: /var/run/dpdk/spdk_pid66160 00:28:49.733 Removing: /var/run/dpdk/spdk_pid66164 00:28:49.733 Removing: /var/run/dpdk/spdk_pid66176 00:28:49.992 Removing: /var/run/dpdk/spdk_pid66216 00:28:49.992 Removing: /var/run/dpdk/spdk_pid66225 00:28:49.992 Removing: /var/run/dpdk/spdk_pid66237 00:28:49.992 Removing: /var/run/dpdk/spdk_pid66281 00:28:49.992 Removing: /var/run/dpdk/spdk_pid66285 00:28:49.992 Removing: /var/run/dpdk/spdk_pid66297 00:28:49.992 Removing: /var/run/dpdk/spdk_pid67674 00:28:49.992 Removing: /var/run/dpdk/spdk_pid67788 00:28:49.992 Removing: /var/run/dpdk/spdk_pid69211 00:28:49.992 Removing: /var/run/dpdk/spdk_pid70566 00:28:49.992 Removing: /var/run/dpdk/spdk_pid70694 00:28:49.992 Removing: /var/run/dpdk/spdk_pid70821 00:28:49.992 Removing: /var/run/dpdk/spdk_pid70936 00:28:49.992 Removing: /var/run/dpdk/spdk_pid71084 00:28:49.992 Removing: /var/run/dpdk/spdk_pid71164 00:28:49.992 Removing: /var/run/dpdk/spdk_pid71312 00:28:49.992 Removing: /var/run/dpdk/spdk_pid71688 00:28:49.992 Removing: /var/run/dpdk/spdk_pid71726 00:28:49.992 Removing: /var/run/dpdk/spdk_pid72210 00:28:49.992 Removing: /var/run/dpdk/spdk_pid72402 00:28:49.992 Removing: /var/run/dpdk/spdk_pid72505 00:28:49.992 Removing: /var/run/dpdk/spdk_pid72623 00:28:49.992 Removing: /var/run/dpdk/spdk_pid72683 00:28:49.992 Removing: /var/run/dpdk/spdk_pid72714 00:28:49.992 Removing: /var/run/dpdk/spdk_pid72998 00:28:49.992 Removing: /var/run/dpdk/spdk_pid73068 00:28:49.992 Removing: /var/run/dpdk/spdk_pid73158 00:28:49.992 Removing: /var/run/dpdk/spdk_pid73583 00:28:49.992 Removing: /var/run/dpdk/spdk_pid73728 00:28:49.992 Removing: /var/run/dpdk/spdk_pid74525 00:28:49.992 Removing: /var/run/dpdk/spdk_pid74679 00:28:49.992 Removing: /var/run/dpdk/spdk_pid74879 00:28:49.992 Removing: 
/var/run/dpdk/spdk_pid74982 00:28:49.992 Removing: /var/run/dpdk/spdk_pid75374 00:28:49.992 Removing: /var/run/dpdk/spdk_pid75662 00:28:49.992 Removing: /var/run/dpdk/spdk_pid76015 00:28:49.992 Removing: /var/run/dpdk/spdk_pid76226 00:28:49.992 Removing: /var/run/dpdk/spdk_pid76358 00:28:49.992 Removing: /var/run/dpdk/spdk_pid76422 00:28:49.992 Removing: /var/run/dpdk/spdk_pid76571 00:28:49.992 Removing: /var/run/dpdk/spdk_pid76602 00:28:49.992 Removing: /var/run/dpdk/spdk_pid76670 00:28:49.992 Removing: /var/run/dpdk/spdk_pid76877 00:28:49.992 Removing: /var/run/dpdk/spdk_pid77153 00:28:49.992 Removing: /var/run/dpdk/spdk_pid77559 00:28:49.992 Removing: /var/run/dpdk/spdk_pid78009 00:28:49.992 Removing: /var/run/dpdk/spdk_pid78413 00:28:49.992 Removing: /var/run/dpdk/spdk_pid78922 00:28:49.992 Removing: /var/run/dpdk/spdk_pid79070 00:28:49.992 Removing: /var/run/dpdk/spdk_pid79174 00:28:49.992 Removing: /var/run/dpdk/spdk_pid79839 00:28:49.992 Removing: /var/run/dpdk/spdk_pid79922 00:28:49.992 Removing: /var/run/dpdk/spdk_pid80354 00:28:49.992 Removing: /var/run/dpdk/spdk_pid80767 00:28:49.992 Removing: /var/run/dpdk/spdk_pid81268 00:28:49.992 Removing: /var/run/dpdk/spdk_pid81391 00:28:49.992 Removing: /var/run/dpdk/spdk_pid81444 00:28:49.992 Removing: /var/run/dpdk/spdk_pid81514 00:28:49.992 Removing: /var/run/dpdk/spdk_pid81581 00:28:49.992 Removing: /var/run/dpdk/spdk_pid81647 00:28:49.992 Removing: /var/run/dpdk/spdk_pid81872 00:28:49.992 Removing: /var/run/dpdk/spdk_pid81939 00:28:49.992 Removing: /var/run/dpdk/spdk_pid82012 00:28:49.992 Removing: /var/run/dpdk/spdk_pid82097 00:28:49.992 Removing: /var/run/dpdk/spdk_pid82137 00:28:49.992 Removing: /var/run/dpdk/spdk_pid82212 00:28:49.992 Removing: /var/run/dpdk/spdk_pid82344 00:28:49.992 Clean 00:28:49.992 08:08:51 -- common/autotest_common.sh@1451 -- # return 0 00:28:49.992 08:08:51 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup 00:28:49.992 08:08:51 -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:49.992 08:08:51 -- common/autotest_common.sh@10 -- # set +x 00:28:50.251 08:08:52 -- spdk/autotest.sh@387 -- # timing_exit autotest 00:28:50.251 08:08:52 -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:50.251 08:08:52 -- common/autotest_common.sh@10 -- # set +x 00:28:50.251 08:08:52 -- spdk/autotest.sh@388 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:28:50.251 08:08:52 -- spdk/autotest.sh@390 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:28:50.251 08:08:52 -- spdk/autotest.sh@390 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:28:50.251 08:08:52 -- spdk/autotest.sh@392 -- # [[ y == y ]] 00:28:50.251 08:08:52 -- spdk/autotest.sh@394 -- # hostname 00:28:50.251 08:08:52 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:28:50.509 geninfo: WARNING: invalid characters removed from testname! 
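The long "Removing:" run above is the autotest cleanup sweeping the DPDK runtime directory and the per-test pid markers; expressed as a sketch, the paths are copied from the log while the loop itself is an assumption:

    for f in /var/run/dpdk/spdk0/config \
             /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-{0..3} \
             /var/run/dpdk/spdk0/fbarray_memzone \
             /var/run/dpdk/spdk0/hugepage_info \
             /var/run/dpdk/spdk0 \
             /var/run/dpdk/spdk_pid*; do
        echo "Removing: $f"
        rm -rf "$f"
    done
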
00:29:22.608 08:09:22 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:29:24.511 08:09:26 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:29:27.796 08:09:29 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:29:31.076 08:09:32 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:29:33.613 08:09:35 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:29:36.192 08:09:38 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:29:39.481 08:09:40 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
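The lcov calls above are the standard coverage post-processing flow: the earlier -c capture wrote cov_test.info, -a merges it with the pre-test baseline into cov_total.info, and each -r pass strips paths that should not count toward coverage (DPDK, /usr, example and app sources). A condensed sketch of the same flow, with the repeated --rc options folded into one variable and illustrative paths standing in for the directories above:

    # sketch only: $out stands in for the output directory seen in the log
    rc="--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1"
    lcov $rc -q -a "$out/cov_base.info" -a "$out/cov_test.info" -o "$out/cov_total.info"   # merge baseline + test run
    for pat in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
        lcov $rc -q -r "$out/cov_total.info" "$pat" -o "$out/cov_total.info"               # drop out-of-scope paths
    done

Each -r pass reads and rewrites cov_total.info in place, which is why the same file appears as both input and output on every line above.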
00:29:39.481 08:09:40 -- common/autotest_common.sh@1680 -- $ [[ y == y ]]
00:29:39.481 08:09:41 -- common/autotest_common.sh@1681 -- $ awk '{print $NF}'
00:29:39.481 08:09:41 -- common/autotest_common.sh@1681 -- $ lcov --version
00:29:39.481 08:09:41 -- common/autotest_common.sh@1681 -- $ lt 1.15 2
00:29:39.481 08:09:41 -- scripts/common.sh@373 -- $ cmp_versions 1.15 '<' 2
00:29:39.481 08:09:41 -- scripts/common.sh@333 -- $ local ver1 ver1_l
00:29:39.481 08:09:41 -- scripts/common.sh@334 -- $ local ver2 ver2_l
00:29:39.481 08:09:41 -- scripts/common.sh@336 -- $ IFS=.-:
00:29:39.481 08:09:41 -- scripts/common.sh@336 -- $ read -ra ver1
00:29:39.481 08:09:41 -- scripts/common.sh@337 -- $ IFS=.-:
00:29:39.481 08:09:41 -- scripts/common.sh@337 -- $ read -ra ver2
00:29:39.481 08:09:41 -- scripts/common.sh@338 -- $ local 'op=<'
00:29:39.481 08:09:41 -- scripts/common.sh@340 -- $ ver1_l=2
00:29:39.481 08:09:41 -- scripts/common.sh@341 -- $ ver2_l=1
00:29:39.481 08:09:41 -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v
00:29:39.481 08:09:41 -- scripts/common.sh@344 -- $ case "$op" in
00:29:39.481 08:09:41 -- scripts/common.sh@345 -- $ : 1
00:29:39.481 08:09:41 -- scripts/common.sh@364 -- $ (( v = 0 ))
00:29:39.481 08:09:41 -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:29:39.481 08:09:41 -- scripts/common.sh@365 -- $ decimal 1
00:29:39.481 08:09:41 -- scripts/common.sh@353 -- $ local d=1
00:29:39.481 08:09:41 -- scripts/common.sh@354 -- $ [[ 1 =~ ^[0-9]+$ ]]
00:29:39.481 08:09:41 -- scripts/common.sh@355 -- $ echo 1
00:29:39.481 08:09:41 -- scripts/common.sh@365 -- $ ver1[v]=1
00:29:39.481 08:09:41 -- scripts/common.sh@366 -- $ decimal 2
00:29:39.481 08:09:41 -- scripts/common.sh@353 -- $ local d=2
00:29:39.481 08:09:41 -- scripts/common.sh@354 -- $ [[ 2 =~ ^[0-9]+$ ]]
00:29:39.481 08:09:41 -- scripts/common.sh@355 -- $ echo 2
00:29:39.481 08:09:41 -- scripts/common.sh@366 -- $ ver2[v]=2
00:29:39.481 08:09:41 -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] ))
00:29:39.481 08:09:41 -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] ))
00:29:39.481 08:09:41 -- scripts/common.sh@368 -- $ return 0
00:29:39.481 08:09:41 -- common/autotest_common.sh@1682 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
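The scripts/common.sh trace above (cmp_versions 1.15 '<' 2) is a plain field-wise version comparison: both versions are split on ".", "-" and ":" into arrays, then walked index by index until one side differs. Here 1 < 2 in the first field, so lt 1.15 2 succeeds and lcov_rc_opt gets the pre-2.0 --rc spelling. A minimal standalone sketch of the same logic, assuming purely numeric fields (version_lt is an illustrative name, not the SPDK helper):

    # version_lt A B: succeed when version A sorts strictly before version B
    version_lt() {
        local -a ver1 ver2
        local v
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
            # missing fields compare as 0, so "2" is treated like "2.0"
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        done
        return 1    # equal versions are not "less than"
    }

    version_lt 1.15 2 && echo "old lcov flag spelling"   # matches the return 0 in the trace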
00:29:39.481 08:09:41 -- common/autotest_common.sh@1694 -- $ export 'LCOV_OPTS=
00:29:39.481 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:29:39.481 --rc genhtml_branch_coverage=1
00:29:39.481 --rc genhtml_function_coverage=1
00:29:39.481 --rc genhtml_legend=1
00:29:39.481 --rc geninfo_all_blocks=1
00:29:39.481 --rc geninfo_unexecuted_blocks=1
00:29:39.481
00:29:39.481 '
00:29:39.481 08:09:41 -- common/autotest_common.sh@1694 -- $ LCOV_OPTS='
00:29:39.481 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:29:39.481 --rc genhtml_branch_coverage=1
00:29:39.481 --rc genhtml_function_coverage=1
00:29:39.481 --rc genhtml_legend=1
00:29:39.481 --rc geninfo_all_blocks=1
00:29:39.481 --rc geninfo_unexecuted_blocks=1
00:29:39.481
00:29:39.481 '
00:29:39.481 08:09:41 -- common/autotest_common.sh@1695 -- $ export 'LCOV=lcov
00:29:39.481 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:29:39.481 --rc genhtml_branch_coverage=1
00:29:39.481 --rc genhtml_function_coverage=1
00:29:39.481 --rc genhtml_legend=1
00:29:39.481 --rc geninfo_all_blocks=1
00:29:39.481 --rc geninfo_unexecuted_blocks=1
00:29:39.481
00:29:39.481 '
00:29:39.481 08:09:41 -- common/autotest_common.sh@1695 -- $ LCOV='lcov
00:29:39.481 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:29:39.481 --rc genhtml_branch_coverage=1
00:29:39.481 --rc genhtml_function_coverage=1
00:29:39.482 --rc genhtml_legend=1
00:29:39.482 --rc geninfo_all_blocks=1
00:29:39.482 --rc geninfo_unexecuted_blocks=1
00:29:39.482
00:29:39.482 '
00:29:39.482 08:09:41 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:29:39.482 08:09:41 -- scripts/common.sh@15 -- $ shopt -s extglob
00:29:39.482 08:09:41 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:29:39.482 08:09:41 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:29:39.482 08:09:41 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:29:39.482 08:09:41 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:29:39.482 08:09:41 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:29:39.482 08:09:41 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:29:39.482 08:09:41 -- paths/export.sh@5 -- $ export PATH
00:29:39.482 08:09:41 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:29:39.482 08:09:41 -- common/autobuild_common.sh@485 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:29:39.482 08:09:41 -- common/autobuild_common.sh@486 -- $ date +%s
00:29:39.482 08:09:41 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1728461381.XXXXXX
00:29:39.482 08:09:41 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1728461381.9cxpH7
00:29:39.482 08:09:41 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]]
00:29:39.482 08:09:41 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']'
00:29:39.482 08:09:41 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/'
00:29:39.482 08:09:41 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:29:39.482 08:09:41 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:29:39.482 08:09:41 -- common/autobuild_common.sh@502 -- $ get_config_params
00:29:39.482 08:09:41 -- common/autotest_common.sh@407 -- $ xtrace_disable
00:29:39.482 08:09:41 -- common/autotest_common.sh@10 -- $ set +x
00:29:39.482 08:09:41 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme'
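The paths/export.sh trace above grows PATH by unconditionally prepending each toolchain directory, which is why the final echo shows /opt/protoc, /opt/go and /opt/golangci twice each: once from this run and once already present in the inherited environment. A duplicate-free variant for comparison (prepend_path is a hypothetical helper, not part of the SPDK scripts):

    # prepend a directory to PATH only if it is not already there
    prepend_path() {
        case ":$PATH:" in
            *":$1:"*) ;;                 # already present, keep PATH as is
            *) PATH="$1:$PATH" ;;
        esac
    }
    prepend_path /opt/golangci/1.54.2/bin
    prepend_path /opt/go/1.21.1/bin
    prepend_path /opt/protoc/21.7/bin
    export PATH

Duplicate entries are harmless for command lookup, but they make PATH traces like the ones above much harder to scan.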
00:29:39.482 08:09:41 -- common/autobuild_common.sh@504 -- $ start_monitor_resources
00:29:39.482 08:09:41 -- pm/common@17 -- $ local monitor
00:29:39.482 08:09:41 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:29:39.482 08:09:41 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:29:39.482 08:09:41 -- pm/common@25 -- $ sleep 1
00:29:39.482 08:09:41 -- pm/common@21 -- $ date +%s
00:29:39.482 08:09:41 -- pm/common@21 -- $ date +%s
00:29:39.482 08:09:41 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1728461381
00:29:39.482 08:09:41 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1728461381
00:29:39.482 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1728461381_collect-cpu-load.pm.log
00:29:39.482 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1728461381_collect-vmstat.pm.log
00:29:40.049 08:09:42 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT
00:29:40.049 08:09:42 -- spdk/autopackage.sh@10 -- $ [[ 0 -eq 1 ]]
00:29:40.049 08:09:42 -- spdk/autopackage.sh@14 -- $ timing_finish
00:29:40.049 08:09:42 -- common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:29:40.049 08:09:42 -- common/autotest_common.sh@737 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:29:40.049 08:09:42 -- common/autotest_common.sh@740 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:29:40.308 08:09:42 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources
00:29:40.308 08:09:42 -- pm/common@29 -- $ signal_monitor_resources TERM
00:29:40.308 08:09:42 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:29:40.308 08:09:42 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:29:40.308 08:09:42 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]]
00:29:40.308 08:09:42 -- pm/common@44 -- $ pid=84081
00:29:40.308 08:09:42 -- pm/common@50 -- $ kill -TERM 84081
00:29:40.308 08:09:42 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:29:40.308 08:09:42 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]]
00:29:40.308 08:09:42 -- pm/common@44 -- $ pid=84082
00:29:40.308 08:09:42 -- pm/common@50 -- $ kill -TERM 84082
00:29:40.317 + [[ -n 5299 ]]
00:29:40.317 + sudo kill 5299
00:29:40.317 [Pipeline] }
00:29:40.333 [Pipeline] // timeout
00:29:40.338 [Pipeline] }
00:29:40.351 [Pipeline] // stage
00:29:40.356 [Pipeline] }
00:29:40.370 [Pipeline] // catchError
00:29:40.379 [Pipeline] stage
00:29:40.381 [Pipeline] { (Stop VM)
00:29:40.393 [Pipeline] sh
00:29:40.672 + vagrant halt
00:29:44.859 ==> default: Halting domain...
00:29:50.138 [Pipeline] sh
00:29:50.419 + vagrant destroy -f
00:29:54.625 ==> default: Removing domain...
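The pm/common trace in this stretch is the usual pid-file lifecycle for background monitors: start_monitor_resources launches each collector detached and records its pid under the power/ output directory, and stop_monitor_resources, registered via trap ... EXIT, later signals exactly the pids those files name (84081 and 84082 here). A minimal sketch of the pattern (my_collector stands in for collect-cpu-load or collect-vmstat; paths are illustrative):

    power_dir=/tmp/power
    mkdir -p "$power_dir"

    # start: run the collector in the background and remember its pid
    my_collector --output "$power_dir" &        # assumption: any long-running monitor
    echo $! > "$power_dir/my_collector.pid"

    # stop (run from an EXIT trap): signal the recorded pid if the file exists
    if [[ -e $power_dir/my_collector.pid ]]; then
        kill -TERM "$(< "$power_dir/my_collector.pid")"
    fi

Driving the stop path from an EXIT trap, as the log does, ensures the collectors are reaped even when the build aborts before the explicit cleanup stage runs.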
00:29:54.637 [Pipeline] sh
00:29:54.917 + mv output /var/jenkins/workspace/nvme-vg-autotest/output
00:29:54.926 [Pipeline] }
00:29:54.940 [Pipeline] // stage
00:29:54.946 [Pipeline] }
00:29:54.960 [Pipeline] // dir
00:29:54.965 [Pipeline] }
00:29:54.980 [Pipeline] // wrap
00:29:54.986 [Pipeline] }
00:29:54.998 [Pipeline] // catchError
00:29:55.008 [Pipeline] stage
00:29:55.012 [Pipeline] { (Epilogue)
00:29:55.025 [Pipeline] sh
00:29:55.307 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:30:03.435 [Pipeline] catchError
00:30:03.437 [Pipeline] {
00:30:03.451 [Pipeline] sh
00:30:03.787 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:30:03.787 Artifacts sizes are good
00:30:03.796 [Pipeline] }
00:30:03.810 [Pipeline] // catchError
00:30:03.822 [Pipeline] archiveArtifacts
00:30:03.830 Archiving artifacts
00:30:03.937 [Pipeline] cleanWs
00:30:03.948 [WS-CLEANUP] Deleting project workspace...
00:30:03.948 [WS-CLEANUP] Deferred wipeout is used...
00:30:03.954 [WS-CLEANUP] done
00:30:03.956 [Pipeline] }
00:30:03.971 [Pipeline] // stage
00:30:03.976 [Pipeline] }
00:30:03.990 [Pipeline] // node
00:30:03.995 [Pipeline] End of Pipeline
00:30:04.033 Finished: SUCCESS